This SMT2 script encounters a segmentation fault. The problem arises when the par-or tactic is used; the script works without this tactic.
I am using z3 4.4.0 on a "Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux" machine.
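The original script is not included here, but a hypothetical minimal example of the same shape (not the actual reproducer) would wrap the solver in par-or like this:

cat > repro.smt2 <<'EOF'
(declare-const x Int)
(assert (> x 0))
(check-sat-using (par-or smt smt))
(get-model)
EOF
z3 repro.smt2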
Thanks for reporting this issue! There was indeed a bug in the model translation, which is now fixed in the unstable branch (as of this commit).
The jaxon Elixir package consistently crashes on only one of my two Macs, even though they are running the same versions of macOS, Xcode, clang, Erlang, and Elixir.
Here is the more detailed bug report
How can I investigate this, or what other dependencies can I check to try to resolve this?
Your CPU likely cannot handle the instructions the code tries to execute.
Check the CPU model of your "Mac Pro 5,1" and whether it supports AVX2 or other instruction-set extensions the compiler may be optimizing for.
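For example, a quick way to compare the two machines is to dump the CPU model and feature flags on each (a hedged sketch; these are the usual sysctl keys on Intel Macs):

# If machdep.cpu.leaf7_features is missing or does not list AVX2,
# the CPU cannot execute AVX2 instructions.
sysctl -n machdep.cpu.brand_string
sysctl machdep.cpu.features machdep.cpu.leaf7_features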
I am trying to venture into accelerating my Fortran 2003 programs with OpenACC directives on my Ubuntu 18.04 workstation with an Nvidia GeForce RTX 2070 card. To that end, I have installed the Nvidia HPC-SDK version 20.7, which comes with the compilers I need (Fortran 2003 from Portland Group and Nvidia, both version 20.7-0) as well as profilers (nvprof and Nvidia Nsight Systems 2020.3.1).
After a few post-installation glitches, and owing mostly to help from Robert Crovella (https://stackoverflow.com/users/1695960/robert-crovella) and Mat Colgrove (https://stackoverflow.com/users/3204484/mat-colgrove), I managed to get things going, which made me very happy.
My workflow looks like this:
I compile my program:
pgfortran -acc -Minfo=accel -o my_program ./my_program.f90
Then I run it through the profiler:
nsys profile ./my_program
And then I import the result into nsight-sys with File -> Open and choose report1.qdrep.
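For reference, the whole sequence as one sketch (the --trace and --output spellings are assumptions about the Nsight Systems CLI and may vary by version):

pgfortran -acc -Minfo=accel -o my_program ./my_program.f90
nsys profile --trace=openacc --output=report1 ./my_program
# then open report1.qdrep in nsight-sys via File -> Open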
I believe this to be a proper workflow. However, while opening the report file, nsight-sys gives me the warning: "OpenACC injection initialization failed. Is the PGI runtime version greater than 15.7?" That's quite unfortunate, because I use OpenACC to accelerate my programs.
I am not quite sure what the PGI runtime is, nor how to check or change it. I assume it is related to the Portland Group compiler, but I use the compiler suite shipped with Nvidia's HPC-SDK, so I wouldn't expect incompatibilities with the profiler tools shipped in the same package.
Is it an option, or possible at all, to update the PGI runtime?
Any advice, please?
Cheers
Same answer as your previous post. There's a known issue with Nsight Systems version 2020.3 which may sometimes cause an injection error when profiling OpenACC. I've been told that this was fixed in version 2020.4, so the workaround would be to download and install 2020.4 or to use a prior release.
https://developer.nvidia.com/nsight-systems
Version 2020.3 is what we shipped with the NVHPC 20.7 SDK. I'm not sure we have enough time to update to 2020.4 in our upcoming 20.9 release, but if not, we'll bundle it in a later release.
Thanks Mat,
In the meantime I managed to get everything running. I did the following:
First I installed the CUDA toolkit, 11.1 to be precise, which came with the latest driver for my Nvidia RTX 2070 card. It needed a reboot, but that's OK. For the CUDA toolkit to work, I had to point LD_LIBRARY_PATH to its libraries.
Then I installed the Nvidia HPC-SDK, which I needed for the Fortran 2003 compiler.
The HPC-SDK is built for CUDA 11.0 and comes with its own libraries, and LD_LIBRARY_PATH would normally point to those rather than to the CUDA toolkit's.
But I kept LD_LIBRARY_PATH pointing to the CUDA toolkit's libraries, and the compilers and profilers now work in perfect harmony :-)
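For reference, the environment ended up looking roughly like this (a sketch; the install prefixes are assumptions based on the default locations for CUDA 11.1 and HPC-SDK 20.7):

export PATH=/usr/local/cuda-11.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH
# deliberately not prepending the HPC-SDK library directory
# (e.g. /opt/nvidia/hpc_sdk/Linux_x86_64/20.7/compilers/lib), as described above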
Thanks again, you and Robert helped me big time to get things running.
I'm trying to debug some problems with a LuaJIT-based application under 64-bit Linux.
The script calls back into a number of libraries via FFI, and I'm trying to figure out some memory-corruption issues. Valgrind is normally my tool of choice for this; however, I've found that luaL_newstate fails when run under Valgrind.
After some poking around, I found that standard LuaJIT also fails to load. I think this might be more a problem with Valgrind than with LuaJIT. Is there a way of giving Valgrind more memory?
If you are using a Valgrind version from 3.9.0 up to (but not including) 3.11.0, that is likely the problem.
Those versions appear to be "broken" on x86_64 with luajit.
See this email from Mike Pall:
[Note that Valgrind 3.9.0 has chosen to block MAP_32BIT, which
breaks LuaJIT on Linux/x64:
https://bugs.kde.org/show_bug.cgi?id=324181
Older versions of Valgrind still work. It wasn't that useful to
debug LuaJIT on x64, anyway, due to the mandatory use of the
builtin allocator. But none of that relates to your problem.]
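To check which Valgrind you have before deciding whether to move to 3.11.0 or later, or back to a pre-3.9.0 release:

valgrind --version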
I am having a hard time getting Boogie and Z3 to install on Windows 7, with Z3 version 4.3.2 from CodePlex.
I've tried to install it like Spec#, using the register.cmd command in a prompt, but it doesn't work.
Can anyone tell me how to do it?
Boogie does not support Z3 version 4.3. You need version 4.1 (see the Installation page on boogie.codeplex.com, which I just updated).
By the way, register.cmd was purely a Spec# thing.
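Roughly, on Windows that means something like the following (a hedged sketch; the paths are placeholders and the /z3exe switch is an assumption about the Boogie command line of that era):

REM either drop a Z3 4.1 z3.exe next to Boogie.exe in its Binaries folder
copy C:\z3-4.1\bin\z3.exe C:\boogie\Binaries\z3.exe
REM or point Boogie at it explicitly
Boogie.exe /z3exe:C:\z3-4.1\bin\z3.exe MyProgram.bpl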
I have a problem installing Sikuli on Fedora 16 (64-bit architecture). My main problem is the compilation of OpenCV: Sikuli requires OpenCV 2.1, and at the cmake step it tells me it cannot find some libraries. I have tried installing the libraries and creating symbolic links, but it still will not work. Does anybody know a good tutorial or have some good ideas to solve my problem?
I'd recommend raising a question on the Sikuli tracker at https://answers.launchpad.net/sikuli because it's quicker to get a response from the developers there.
Here is a question similar to your issue: https://answers.launchpad.net/sikuli/+question/198796
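In the meantime, the usual OpenCV source build goes roughly like this (a sketch; the package and archive names are assumptions, and the cmake output will tell you which -devel packages are still missing):

sudo yum install gcc gcc-c++ cmake gtk2-devel libjpeg-devel libpng-devel libtiff-devel
tar xf OpenCV-2.1.0.tar.bz2 && cd OpenCV-2.1.0
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_SHARED_LIBS=ON ..
make && sudo make install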