How to configure mono to use more than 4G memory? - f#

I want to run a .NET executable that needs more than 4 GB of RAM on OS X 10.9. I had Xamarin Studio installed, but AFAICT Xamarin doesn't come with a 64-bit mono build, so I decided to make a custom 64-bit mono with the "--with-large-heap=yes" configuration and install it in a different location.
git clone https://github.com/mono/mono
cd mono
./configure --prefix=<my-local-dir> --enable-nls=no --with-large-heap=yes
make
make install
(I also built a 64-bit F# and installed it in my-local-dir, following "Option 3" on this page.)
However, when I use the 64-bit mono to run the executable (an F# program built with the canonical "fsharpc" in Xamarin), it still crashes with a System.OutOfMemoryException. I tried this:
export PATH=$PATH:<my-local-dir>/bin
MONO_GC_PARAMS=max-heap-size=5g <my-local-dir>/bin/mono <my-executable>
And it gives a warning
Warning: In environment variable `MONO_GC_PARAMS': `max-heap-size` must be an integer.
(This error message is a bit misleading; I think it really means 5g is too large and not supported, because it doesn't complain if I put "3g" there.) And the program still crashes with the same exception at the point where it exceeds the memory.
Did I miss anything important? How do I configure mono to have a heap larger than 4 GB?

You are still running the 32-bit version of mono (check your PATH env var). This also explains the failure to parse 5g for max-heap-size (it will work correctly with 64-bit mono).
The default for a configure command like the one above is to install into /usr/local/bin, so just run your programs with /usr/local/bin/mono program.exe.
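One way to confirm which mono you are actually invoking, and whether it is 64-bit, is to prepend the custom prefix to PATH and check the version banner (a quick sketch; <my-local-dir> stands for whatever prefix you configured, and the exact wording of the version output can vary between mono releases):

export PATH=<my-local-dir>/bin:$PATH    # prepend, so the custom build wins over the system mono
which mono                              # should print <my-local-dir>/bin/mono
mono --version                          # a 64-bit build reports "Architecture: amd64"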

Related

Unable to Load Dart SDK on Raspberry Pi Zero W

I'm trying to get the Dart SDK on a Pi Zero W.
When I download the SDK archive, extract it, and put it in the /usr/lib folder manually, I get segmentation faults when I try to run any of the command-line tools. I reflashed the memory card (32GB, so plenty large) from scratch from an x64 machine and pre-loaded the SDK as well, to help rule out any funky Pi file corruption, and got the same result.
Though I was sure it wouldn't work, I also tried the ARMv7 version of the SDK, and got executable file format incompatibility errors, which was not surprising.
I downloaded the .deb package, and got a warning that the file was not meant for my Pi and that I might break things, so I didn't try to install it.
I used the apt-get instructions from the Dart website, and that failed with the error "E: Unable to locate package dart", which seems to indicate that I had the incorrect name for the package (note: I copied and pasted it directly from the Dart website). I tried to look through the repository contents, and assuming that I looked at the correct file, there were no Dart entries in it, so the error is not surprising.
My Linux competence is suspect, so I could use any ideas. I'd prefer not to build the SDK from scratch because, in my experience, open-source build instructions almost always assume that the user needs to know or do something that is not explicitly listed in the instructions, so that tends to be a two-hour effort that ultimately fails (pretty sure I'm not the only one who's had that experience).
Thoughts, anyone?
That is not going to work. Your problem is that the Pi Zero W has a "1GHz single-core ARMv6 CPU (BCM2835)", which means it can only execute programs built for the ARMv6 architecture or lower.
Dart now requires at least ARMv7, since ARMv6 support was removed early this year: https://github.com/dart-lang/sdk/issues/42069
Support for ARMv6 was never that great anyway (I did have an old Raspberry Pi): programs ran really slowly and FFI support was missing. So my recommendation is to get a board that supports ARMv7 or ARMv8 (ARM64), which works really well.
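As a quick sanity check, you can confirm the architecture directly on the board (the outputs below are what a Pi Zero W typically reports; they may differ slightly between OS images):

uname -m             # prints "armv6l" on a Pi Zero W
cat /proc/cpuinfo    # shows an ARMv6-compatible processor and the BCM2835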

NVIDIA Nsight warning: OpenACC injection initialization failed. Is the PGI runtime version greater than 15.7?

I am trying to venture into accelerating my Fortran 2003 programs with OpenACC directives on my Ubuntu 18.04 workstation with an Nvidia GeForce RTX 2070 card. To that end, I have installed Nvidia HPC-SDK version 20.7, which comes with the compilers I need (Fortran 2003 from the Portland Group and Nvidia, both version 20.7-0) as well as profilers (nvprof and Nvidia Nsight Systems 2020.3.1).
After a few post-installation glitches, and owing mostly to the help of Robert Crovella (https://stackoverflow.com/users/1695960/robert-crovella) and Mat Colgrove (https://stackoverflow.com/users/3204484/mat-colgrove), I managed to get things going, which made me very happy.
My workflow looks like this:
Compile my program:
pgfortran -acc -Minfo=accel -o my_program ./my_program.f90
I run it through the profiler:
nsys profile ./my_program
And then import the result into nsight-sys via File -> Open, choosing report1.qdrep.
I believe this to be a proper workflow. However, while opening the report file, nsight-sys gives me the warning: "OpenACC injection initialization failed. Is the PGI runtime version greater than 15.7?" That's quite unfortunate, because I use OpenACC to accelerate my programs.
I am not quite sure what the PGI runtime is, nor how to check or change it. I assume it is something to do with the Portland Group compiler, but I use the compiler suite shipped with Nvidia's HPC-SDK, so I wouldn't expect incompatibilities with the profiler tools shipped in the same package.
Is it an option, or possible at all, to update the PGI runtime?
Any advice, please?
Cheers
Same answer as for your previous post: there's a known issue with Nsight Systems version 2020.3 which may sometimes cause an injection error when profiling OpenACC. I've been told that this was fixed in version 2020.4, hence the workaround would be to download and install 2020.4 or use a prior release.
https://developer.nvidia.com/nsight-systems
Version 2020.3 is what we shipped with the NVHPC 20.7 SDK. I'm not sure we have enough time to update to 2020.4 in our upcoming 20.9 release, but if not, we'll bundle it in a later release.
Thanks Mat,
In the meantime I managed to get everything running. I did as follows:
First I installed the CUDA toolkit (11.1, to be precise), which came with the latest driver for my Nvidia RTX 2070 card. It needed a reboot, but that's OK. For the CUDA toolkit to work, I had to set LD_LIBRARY_PATH to its libraries.
Then I installed the Nvidia HPC-SDK, which I needed for the Fortran 2003 compiler.
The HPC-SDK is built for CUDA version 11.0 and comes with its own libraries, and its LD_LIBRARY_PATH is supposed to point to those libraries rather than the CUDA toolkit's.
But I kept LD_LIBRARY_PATH pointing to the CUDA toolkit ones, and the compilers and profilers work in perfect harmony :-)
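For reference, the environment ended up looking roughly like this (the paths below are the default install locations, which are an assumption on my part; adjust them to wherever the CUDA toolkit and HPC-SDK actually live on your machine):

export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/20.7/compilers/bin:$PATH    # pgfortran from the HPC-SDK
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH       # CUDA toolkit libraries, as described above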
Thanks again, you and Robert helped me big time to get things running.

Develop for stm32 on beaglebone

Is it possible to compile stm32 code on a beaglebone (possibly black)?
It seems the platform has to have access to arm-none-eabi-gcc to be able to compile for stm32?
Basically, yes you can.
In order to compile code for the stm32 family of MCUs you need a cross-compiler. If you run Linux on your beaglebone board you can simply download a premade toolchain for your distribution. If you don't find any, you can build a compiler from source by specifying the host and target.
https://wiki.osdev.org/GCC_Cross-Compiler#Preparing_for_the_build - this article will help you build a gcc cross-compiler for every supported target and host. It takes less than 15 minutes.
A few things to note: once you build your cross-compiler, you won't have any kind of libc shipped with it, so get one on GitHub (there are a few); your options would be libc or libc-nano (which speaks for itself). From there you will be able to compile code for your stm32 MCU.
For example, I ran Ubuntu Server on my beaglebone black, so to compile for stm32 on it I first installed an ARMv7-A gcc compiler so I could compile for the beaglebone itself. Then I downloaded the source from https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm/downloads and compiled it, which gives you the official ARM toolchain for microcontrollers.
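Once arm-none-eabi-gcc is on the board, building for an stm32 looks roughly like this (a sketch, not a complete project: the source file, linker script, and Cortex-M4 target are placeholders; swap in whatever matches your particular stm32 part):

arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -O2 -c main.c -o main.o                                    # compile for the MCU core
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb --specs=nano.specs -T stm32f4.ld main.o -o firmware.elf    # link against the nano libc with the part's linker script
arm-none-eabi-objcopy -O binary firmware.elf firmware.bin                                            # raw image ready to flash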

Is Clang as portable as (or more portable than) gcc for C++?

Suppose I have a C++ project, and I compile it with gcc and with clang. You can assume that the gcc-compiled version runs on another Linux machine. Does this imply (in normal circumstances) that the clang version will also run on the other Linux machine?
Clang binaries are as portable as gcc binaries, as long as you are linking to the same libraries and you aren't passing flags like -march=native to the compiler.
Clang has one huge advantage over gcc: it can deal with almost all libstdc++ versions, while gcc is bound to its bundled version and often can't parse older versions.
So the following often happens in production environments:
Install an LTS distro (Ubuntu 12.04 for example)
Keep gcc, glibc and libstdc++ untouched
Install a recent clang version for C++11, etc
Build the release binaries with clang
So (in my specific example) those binaries will work on all distros with libstdc++ >= 4.6 and glibc >= 2.15.
This may be an interesting read for you.
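If you want to see what a given binary actually requires, one way (assuming the executable is called my_program; any ELF binary works) is to inspect the versioned symbols it imports:

objdump -T my_program | grep -o 'GLIBC_[0-9.]*' | sort -Vu     # highest GLIBC_x.y listed is the minimum glibc needed
objdump -T my_program | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu   # same idea for libstdc++
ldd my_program                                                 # shared libraries the loader must find at run time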
If the program is a simple Hello World, it should work on the other machine when compiled with Clang.
But when the program is a real program, with a lot of lines and compilation units and calls to many external libs, anything is possible, depending on the program itself and the compilation options:
hardware requirements (memory) being different (mainly depends on compilation options)
use of different (versions of) libraries between gcc and clang
UB giving expected results in one and not in the other
different uses of implementation-defined rules
use of gcc extensions not accepted by clang
For all of the above except the first two, it should run on other machines if it runs on one.
Linux programs depend on their build environment. If your glibc version or kernel is different, there is a good chance the executable will not be able to run. You could use LLVM's intermediate representation though; it compiles into bitcode which can be interpreted on various operating systems.
The answer is, well, depends.
The first hard requirement is the same CPU architecture. 64-bit is not enough of a qualifier: if you compile for x64 you won't have much success running it on 64-bit ARM.
The next big one is libraries. If you use any libraries in the program, the target system needs to have those libraries. This includes the kernel headers, so if you compile against e.g. a current kernel version, using the most cutting-edge features, you will have no joy running that program on a very old version of Linux.
The last one is hardware dependencies. If you create a program that e.g. requires 4 GB of RAM and then try to run it on a small embedded device with 256 MB RAM, that won't work either.
To better fit your amended question: from my experience there shouldn't be much of a difference in portability between Clang and gcc. Googling didn't turn up anything to the contrary either, so it should basically work. But always test things like that before you publish a binary to production.
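A quick way to check the CPU-architecture point above (my_program is a placeholder name) is the file utility, which reports the ELF target that either compiler produced:

file ./my_program    # e.g. "ELF 64-bit LSB executable, x86-64 ..." vs "... ARM aarch64 ..."
uname -m             # architecture of the machine you want to run it on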

Delphi 7 exe is not working on a non-Delphi machine

Recently I got a chance to work with Delphi 7. I created a sample application which displays a welcome message, and that exe works fine on the Delphi machine. If I move that exe to a non-Delphi machine (where Delphi is not installed), it throws the error "The program can't start because rtl70.bpl is missing from your computer. Try reinstalling the program to fix the problem".
If I follow the same process with Delphi 5, it works fine.
You have built the program to rely on runtime packages. That means that each machine that needs to run the program needs to have the runtime packages available.
There are two solutions:
Distribute the runtime packages that you use alongside the executable.
Disable runtime packages and so build an executable that contains the runtime.
Whether runtime packages are used is determined by settings in the project options.
Unless you have some compelling reason to use runtime packages, the second option is much simpler because it allows the executable file to stand alone, with no external dependencies.
