What are the current plans for reactivating the parallel version of Z3?
Z3 never had extensive support for parallelism. In version 2.x, we included an experimental feature that allowed users to execute several copies in parallel using different configuration options. The different copies could also share information and prune each other's search space. This feature had some limitations: for example, it was not available in the programmatic API. It also conflicted with long-term research goals and directions, so it has been removed from recent versions.
That being said, in the Z3 4.x API it is safe to create multiple contexts (Z3_context) and access them concurrently from different threads; the previous versions were not thread-safe. In Z3 4.x we can also define custom strategies using parallel combinators. For example, the combinator (par-or t1 t2) executes the strategies t1 and t2 in parallel. These combinators are available in the programmatic API and in the SMT 2.0 front-end. The following online tutorial contains additional information: http://rise4fun.com/Z3/tutorial/strategies
The following command (for the SMT 2.0 front-end) will check the asserted formulas using two copies of the tactic smt with different random seeds.
(check-sat-using (par-or (! smt :random-seed 10) (! smt :random-seed 20)))
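A slightly fuller sketch of the same idea (the declarations here are illustrative, not from the original question): the par-or combinator runs both configurations of the smt tactic in parallel, and the first one to finish decides the result.

```smt2
(declare-const x Int)
(declare-const y Int)
(assert (> (* x y) 100))
(assert (< (+ x y) 50))
; Run two copies of the smt tactic with different random seeds in
; parallel; whichever finishes first wins and the other is stopped.
(check-sat-using (par-or (! smt :random-seed 10) (! smt :random-seed 20)))
```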
I am using a support vector regressor. I want to predict personality, as shown in the screenshot. Is it possible to predict when y is in string format? I used a one-hot encoder but it's not working.
This is not a regression task, but classification. "Not working" is not very informative; normally you'd just map classes to integers. sklearn.preprocessing.LabelEncoder, sklearn.preprocessing.label_binarize().argmax(axis=1), pandas.factorize(), or a manual mapping should get the job done.
Worth noting that support vector machines don't handle multiclass problems natively, so you may encounter trouble depending on the exact model you use. Recent sklearn versions handle it automatically when using models like sklearn.svm.LinearSVC, building N binary classifiers under the hood.
I'd also recommend getting acquainted with a more elegant way of ensembling SVMs for multiclass problems, using sklearn.multiclass.OutputCodeClassifier().
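A minimal sketch of the LabelEncoder approach, assuming scikit-learn is available (the feature values and the "introvert"/"extrovert" labels are made up for illustration):

```python
# Encode string class labels as integers, then fit a linear SVM classifier.
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import LinearSVC

X = [[0.1, 0.2], [0.4, 0.3], [0.8, 0.9], [0.7, 0.6]]
y = ["introvert", "extrovert", "extrovert", "introvert"]

le = LabelEncoder()
y_encoded = le.fit_transform(y)  # classes are sorted: extrovert -> 0, introvert -> 1

clf = LinearSVC()  # one binary classifier per class under the hood
clf.fit(X, y_encoded)

# Map integer predictions back to the original string labels.
pred = le.inverse_transform(clf.predict([[0.5, 0.5]]))
```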
I played around with Z3 after reading the excellent tutorial at https://www.rise4fun.com/Z3/tutorial. But now I would like to get an overview over all commands available in Z3's dialect of SMTLIB2.
Unfortunately I only found reference manuals for the different languages bindings, but not for SMTLIB2 itself.
You can read all about SMTLib in http://smtlib.cs.uiowa.edu/
In particular, the document http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf is the "official" document on all SMTLib commands.
For logics, you want to browse: http://smtlib.cs.uiowa.edu/logics.shtml
Now, this document is not Z3-specific, but to a large extent it captures all the SMT commands and logics supported by Z3, and Z3 is one of the most "compliant" solvers out there in terms of implementing the spec. There are a few differences, of course: for instance, the spec never talks about optimization, yet Z3 supports it; likewise for set operations and a few other "extras." As Malte pointed out, the documentation for these is available, but maybe not easy to navigate. My favorite links are:
https://ericpony.github.io/z3py-tutorial/guide-examples.htm (Python specific, but also tons of info on Z3 features.)
Programming in Z3: https://theory.stanford.edu/~nikolaj/programmingz3.html This is a wonderful document detailing how z3 works internally with most of its features demonstrated. Again, it uses Python, but for the most part you can find the corresponding commands in SMTLib more or less directly.
API documentation in various languages: https://z3prover.github.io/api/html/index.html Eventually you'll need these as you get to program z3; but you can keep this as a "reference" only for later use.
If there is a specific piece of info you're looking for that's not covered in one of these documents, then that's what this forum is for! Best of luck.
I'm not aware of any such reference manual, browsing the source code is probably your best option currently.
I might of course be wrong about the existence of such a manual, but the fact that questions about Z3's SMT-LIB dialect are often asked via the Z3 issue tracker (e.g. #4549, #4536, #4460) suggests that there is no reference manual. The developers' responses also do not hint at any such manual.
I am writing a site parser in Python (I pull data from pages, process it, and perform various arithmetic operations on values generated with JS). I use Selenium plus pure lxml where possible, but I am not happy with the performance.
I want to rewrite it in another, faster programming language, but I don't know which one to choose.
Some say Scala does everything, some say C++ (not even C), some advocate Assembler, Rust, Perl, PHP... In general, I'm confused. What parses a dynamic site fastest?
Assuming the pages being scraped are not in your local network (and maybe even if they are, depending on how they are generated), it's likely that the slowest part of your scrape will be waiting for the page to be sent over the network.
Since you're scraping multiple pages, the simplest way of speeding up the process is to scrape multiple pages in parallel, so that it is not necessary to wait for one page to finish before you start downloading the next one.
Any language which allows parallel processing would work, but even if the language doesn't support it, you could run several scraping processes in parallel using a standard shell.
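A minimal standard-library sketch of the parallel approach. Here fetch() simulates a network-bound download with time.sleep; in a real scraper it would call urllib.request.urlopen or your Selenium driver instead (the URLs are hypothetical):

```python
# Fetch several "pages" in parallel with a thread pool; threads are a good
# fit here because the work is I/O-bound (waiting on the network).
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    time.sleep(0.2)  # stand-in for network latency
    return f"<html>content of {url}</html>"

urls = [f"https://example.com/page{i}" for i in range(8)]

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    pages = list(pool.map(fetch, urls))
elapsed = time.time() - start
# With 8 workers, total time is close to one fetch (~0.2 s),
# not 8 sequential fetches (~1.6 s).
```

The same idea applies in any language; the speedup comes from overlapping the network waits, not from the language itself.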
What do you recommend (or not) to use for Real Time features (like chats or auctions) in web applications?
The most important for me is your opinion or benchmarks about the efficiency / performance / speed of specific frameworks, technologies and solutions.
For example:
Ruby on Rails + ActionCable
Phoenix + Elixir
Socket.io
QUESTION'S CONTEXT:
Each framework, programming language, and technology has advantages and disadvantages that make it more or less effective for real-time needs. Sometimes we can use multiple technologies to build an app's backend, for example when the backend is a set of cooperating services (SOA, microservices, etc.). Because of both, we are able to create some features in Ruby on Rails (because the implementation is fast) and others in Java (because it runs fast).
If I were in your shoes, I would follow the Elixir & Phoenix path.
Elixir is basically Erlang with better syntax, and it's open for extension via macros, so you can customize it however you want.
Please take a look at these great articles about that:
The road to 2 million websocket connections
Phoenix Channels vs Rails Action Cable
Basically:
Elixir was created to handle such scenarios with grace, efficiency, low latency, great scalability, and fun.
P.S. Please remember that compilation time is not as important as the time spent handling a request, getting a response, and serving many concurrent websocket connections.
Elixir is not the fastest language, but it leverages concurrency and is unique in terms of responsiveness.
We just received the stable version of CUDA 5. There are some new terms like Kepler, the ability to use MPI with better performance, and running up to 32 applications on the same card at the same time. I am a bit confused, though, and am looking for answers to these questions:
Which cards and compute capabilities are required to fully utilize CUDA 5's features?
Are the new features, like GPUDirect, Dynamic Parallelism, and Hyper-Q, only available on the Kepler architecture?
If we have a Fermi architecture, what are the benefits of using CUDA 5? Does it bring benefits other than the ability to use Nsight on Linux with Eclipse? I think the most important feature is the ability to build libraries?
Did you see any performance improvements just by moving from CUDA 4 to CUDA 5? (I got some speedups on Linux machines.)
I found out some documents like
http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/Kepler_Compatibility_Guide.pdf
http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
http://blog.cuvilib.com/2012/03/28/nvidia-cuda-kepler-vs-fermi-architecture/
However, a better, shorter description might make things clearer.
PS: Please do not limit the answer to the questions above. I might be missing some similar questions.
Compute capability 3.5 (GK110, for example) is required for dynamic parallelism because earlier GPUs do not have the hardware required for threads to launch kernels or directly inject other API calls into the hardware command queue.
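A minimal sketch of what that device-side launch looks like (illustrative only; requires a CC 3.5+ GPU and separate compilation, e.g. nvcc -arch=sm_35 -rdc=true example.cu -lcudadevrt):

```cuda
#include <cstdio>

__global__ void child(int parent_idx) {
    printf("child launched by parent thread %d\n", parent_idx);
}

__global__ void parent() {
    // Each parent thread launches a child grid directly from the device,
    // without a round trip to the host -- the capability gated on CC 3.5.
    child<<<1, 1>>>(threadIdx.x);
}

int main() {
    parent<<<1, 4>>>();
    cudaDeviceSynchronize();
    return 0;
}
```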
Compute capability 3.5 is required for Hyper-Q.
The SHFL (warp shuffle) intrinsics require CC 3.0 (GK104).
Device code linking, NSight EE, nvprof, performance improvements and bug fixes in CUDA 5 benefit Fermi and earlier GPUs.