Calculating factorial in FORTRAN with integer variables: memory overflow

I'm writing a somewhat special program in FORTRAN: I can only use integer variables, and as you know, with those you get an overflow when you try to calculate a factorial larger than 12! or 13!. So I made this program to work around the problem:
http://lendricheolfiles.webs.com/codigo.txt
But something very strange is happening: the program calculates the factorial correctly 4 or 5 times and then gives a memory overflow message. I'm using Windows 8 and I wonder whether that might be the cause of the failure, or whether I've just done something wrong.
Thanks.

Try compiling with run-time subscript checking. In Fortran, segmentation faults are generally caused either by subscript errors or by mismatches between actual and dummy arguments (i.e., between the arguments in the call to a procedure and the arguments as declared in the procedure). I'll make a wild guess, from glancing at your code, that you have a subscript error -- let the compiler find it for you by turning on run-time subscript checking. Most Fortran compilers have this as a compilation option.
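For example, with gfortran (assuming that is your compiler; most others have an equivalent switch) you would compile like this, after which an out-of-bounds access stops the program with a message naming the array and the line:

gfortran -fcheck=bounds codigo.f90 -o codigo

With Intel Fortran the equivalent option is -check bounds.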
P.S. You can also do calculations like this by using already written packages, e.g., the arbitrary precision arithmetic software of David Bailey, et al., available in Fortran 90 at http://crd-legacy.lbl.gov/~dhbailey/mpdist/

M.S.B.'s answer has the gist of your problem: your array indices go out of bounds at a couple of places.
In three loops, the index cifra - 1 becomes 0, which is out of bounds:
do cifra=ncifras,1,-1
    factor(1,cifra-1) = factor(1,cifra)/10 ! factor is (1:2, 1:ncifras)
    factor(1,cifra) = mod(factor(1,cifra),10)
enddo
! :
! Same here:
do cifra=ncifras,1,-1
    factor(2,cifra-1) = factor(2,cifra)/10
    factor(2,cifra) = mod(factor(2,cifra),10)
enddo
! :
do cifra=ncifras,1,-1
    sumaprovisional(cifra-1) = sumaprovisional(cifra-1)+(sumaprovisional(cifra)/10)
    sumaprovisional(cifra) = mod(sumaprovisional(cifra),10)
enddo
In the next case, the value of cifra - (fila - 1) goes out of bounds:
do fila=1,nfilas
    do cifra=1,ncifras
        ! Out of bounds for all cifra < fila:
        sumando(fila,cifra-(fila-1)) = factor(1,cifra)*factor(2,ncifras-(fila-1))
    enddo
    sumaprovisional = sumaprovisional+sumando(fila,:)
enddo
You should be fine if you rewrite the first three loops as do cifra = ncifras, 2, -1 and the inner loop of the other case as do cifra = fila, ncifras, as sketched below. Also, in the example program you posted, you first have to allocate resultado properly before passing it to the subroutine.
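Concretely, the fixed loops look like this (same bodies; only the bounds change, so that every index stays at 1 or above):

do cifra=ncifras,2,-1
    factor(1,cifra-1) = factor(1,cifra)/10
    factor(1,cifra) = mod(factor(1,cifra),10)
enddo
! ... and likewise for the factor(2,:) and sumaprovisional loops ...
do fila=1,nfilas
    do cifra=fila,ncifras ! now cifra-(fila-1) >= 1
        sumando(fila,cifra-(fila-1)) = factor(1,cifra)*factor(2,ncifras-(fila-1))
    enddo
    sumaprovisional = sumaprovisional+sumando(fila,:)
enddo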


Creating an array with operands results in nil members

I just started learning Lua and ran into a strange problem. The following code...
local xx = 100
vertices0 = {xx, xx}
vertices1 = {xx−5, xx-5}
results in...
an array containing (100, 100) for vertices0 (as expected), but
an array containing (nil, 95) for vertices1.
I really don't understand what is causing the nil to appear. I expected to get an array with (95, 95).
I checked the documentation and tried to google the problem, but was not able to solve it.
Btw - I'm using love2d, but "regular" Lua seems to cause the same behaviour.
xx−5 is not using the ASCII hyphen-minus - but the Unicode minus sign −. Lua treats − as part of an identifier, so xx−5 is a separate identifier rather than the desired subtraction operation xx - 5:
local xx = 100
local xx−5 = 100 -- declares a variable whose name is literally "xx−5"
vertices0 = {xx, xx}
vertices1 = {xx−5, xx-5}
print(vertices1[1]) --> 100
This appears to work in Lua 5.1, but not in later versions. Additionally, an issue like this is easier to spot if you place spaces around an operator and its operands, which tends to be a good style choice for readability anyway:
vertices1 = {xx − 5, xx - 5}
Also, if you use syntax highlighting, you may notice that the improper character does not get highlighted properly.
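Once the stray character is replaced with the ASCII hyphen-minus, everything behaves as expected:

local xx = 100
vertices0 = {xx, xx}
vertices1 = {xx - 5, xx - 5}
print(vertices1[1], vertices1[2]) --> 95  95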

Is there an option to make LuaJIT do bounds checking?

LuaJIT knows the C types it defines, and the lengths of the arrays, but it doesn't check the bounds:
ffi = require("ffi")
ten_ints = ffi.typeof("int [10]")
p1 = ten_ints()
print(ffi.sizeof(p1)) -- 40
var_ints = ffi.typeof("int [?]")
p2 = ffi.new(var_ints, 10)
print(ffi.sizeof(p2)) -- 40
p1[1000000] = 1 -- segfault
p2[1000000] = 1 -- segfault
Is there a way to make it do that, or is my only choice to write wrappers?
Short answer: There is no way, you'll have to write/find your own wrapper.
Here is the explanation from luajit.org:
No Hand-holding!
[...] The FFI library provides no memory safety, unlike regular
Lua code. It will happily allow you to dereference a NULL pointer, to
access arrays out of bounds or to misdeclare C functions. If you make
a mistake, your application might crash, just like equivalent C code
would. This behavior is inevitable, since the goal is to provide full
interoperability with C code. Adding extra safety measures, like
bounds checks, would be futile. [...] Likewise there's no way to
infer the valid range of indexes for a returned pointer. Again: the
FFI library is a low-level library.
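For completeness, here is a minimal sketch of such a wrapper (my own illustration, not anything LuaJIT provides): hide the cdata array behind an empty table whose metamethods validate the index before touching the real array.

ffi = require("ffi")

-- Hypothetical helper: a bounds-checked view over an int[n] cdata array.
function checked_ints(n)
    local arr = ffi.new("int[?]", n)
    local function check(i)
        assert(type(i) == "number" and i >= 0 and i < n, "index out of bounds")
    end
    return setmetatable({}, {
        __index = function(_, i) check(i) return arr[i] end,
        __newindex = function(_, i, v) check(i) arr[i] = v end,
    })
end

p = checked_ints(10)
p[3] = 42         -- fine
-- p[1000000] = 1 -- raises "index out of bounds" instead of segfaulting

Every access now goes through a Lua table and a metamethod, so expect some overhead compared to raw cdata indexing.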

E2099 Overflow in conversion or arithmetic operation

I want to compare an int64 with a variable like this:
const GB=1073741824;
if DiskFile.Size< 1*GB then
It works with 1 but not with 3:
if DiskFile.Size< 3*GB then
This post (Strange Delphi integer multiplication behavior) explains why. I agree with that explanation. The result of 2*GB cannot fit in an Integer. What I don't understand is why the compiler chooses Integer instead of Int64, since in the case of:
if DiskFile.Size< 3073741824 then <--------- almost 3GB
that works.
Is there any way to write the last line of code in the 3*GB style (using constants) but without defining a new constant for each of 1GB, 2GB, 3GB, 4GB, etc.?
The first thing to be clear about here is that the integer overflow occurs in the compiler. The compiler has to evaluate your expression because it is a constant expression, and constant expressions are evaluated by the compiler.
The documentation is a little sparse (and I am being kind here) on how the compiler treats your expression. We can infer, at least empirically, that the compiler attempts to perform 3*GB in a signed integer context. That is clear from the error message.
You need to force the compiler to evaluate the expression in an Int64 context. A cast will force that:
if DiskFile.Size< Int64(3)*GB then
....
Another option is to make the constant have type Int64:
const
  GB = Int64(1073741824);
Although I think I'd write it like this:
const
  KB = Int64(1024);
  MB = 1024*KB;
  GB = 1024*MB;
So long as GB is a 64 bit type then you can revert to:
if DiskFile.Size < 3*GB then
....
I'd like to elaborate on my second paragraph above. How can we tell that the compiler performs the arithmetic in 32 bit signed integer context? The following program suggests that this is so:
{$APPTYPE CONSOLE}
const
  C1 = 715827882; // MaxInt div 3
  C2 = C1+1;
begin
  Writeln(3*C1);
  Writeln(3*C2);
  Readln;
end.
The first expression, 3*C1, compiles; the second fails with E2099. The first expression does not overflow a signed 32-bit integer, the second does.
When looking at the documentation, it is unclear whether the true constant 1073741824 should be of type Integer or Cardinal. The compiler could choose either. It seems that the compiler, when presented with a choice between signed and unsigned types, chooses signed types.
But then one might imagine that the following program would behave in the same way, but with Smallint and Word taking the place of Integer and Cardinal:
{$APPTYPE CONSOLE}
const
  C1 = 10922; // high(Smallint) div 3
  C2 = C1+1;
begin
  Writeln(3*C1);
  Writeln(3*C2);
  Readln;
end.
But no, this program compiles. So, at this point I am giving up on the documentation, which appears to bear little relationship to the actual behaviour of the compiler.
My best guess is that an integral true constant is handled as follows:
If it is within the range of Integer, it is of type Integer.
Otherwise, if it is within the range of Cardinal, it is of type Cardinal.
Otherwise, if it is within the range of Int64, it is of type Int64.
Otherwise, if it is within the range of UInt64, it is of type UInt64.
Otherwise it is a compiler error.
Of course, all of this assumes that the compiler's rules for evaluating constant expressions follow the same rules as the rest of the language. I'm not certain that is the case.
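If that guess is right, the probe technique from above should reproduce E2099 at the Cardinal boundary too. A hypothetical test (I have not verified how the compiler actually reacts; the literal is chosen so that it fits Cardinal but not Integer):

{$APPTYPE CONSOLE}
const
  C = 2147483648; // High(Integer)+1: fits Cardinal, not Integer
begin
  Writeln(2*C); // 2*C exceeds High(Cardinal); per the guess this gives E2099
  Readln;
end.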

How to get the available memory on the device

I'm trying to find out how much free memory I have on the device. To do this I call the CUDA function cuMemGetInfo from Fortran code, but it returns negative values for the amount of free memory, so there's clearly something wrong.
Does anyone know how I can do that?
Thanks
EDIT:
Sorry, in fact my question was not very clear. I'm using OpenACC in Fortran and I call the C++ CUDA function cudaMemGetInfo. I finally managed to fix the code; the problem was indeed the kind of the variables I was using. Switching to size_t fixed everything. This is the interface in Fortran that I'm using:
interface
    subroutine get_dev_mem(total,free) bind(C,name="get_dev_mem")
        use iso_c_binding
        integer(kind=c_size_t) :: total, free
    end subroutine get_dev_mem
end interface
and this is the CUDA code:
#include <cuda.h>
#include <cuda_runtime.h>

extern "C" {
    void get_dev_mem(size_t& total, size_t& free)
    {
        cuMemGetInfo(&free, &total);
    }
}
There's one last question: I pushed an array onto the GPU and checked its size using cuMemGetInfo, then I computed its size by counting the number of bytes, but I don't get the same answer. Why? In the first case it is 3052 MB large, in the latter 3051 MB. Could this 1 MB difference be the size of the array descriptor? Here is the code that I used:
use iso_c_binding            ! for c_size_t
integer, parameter :: long = selected_int_kind(12)
integer(kind=c_size_t) :: total, free1, free2
real(8), dimension(:), allocatable :: a
integer(kind=long) :: N, eight, four

eight = 8              ! bytes per real(8) element
four  = 4
N     = 100000000_long ! illustrative value consistent with the ~3 GB figure; the original is not shown
allocate(a(four*N))
! some OpenACC stuff in order to init the gpu
call get_dev_mem(total,free1)
!$acc data copy(a)
call get_dev_mem(total,free2)
print *,"size a in the gpu = ",(free1-free2)/1024/1024, " mb"
print *,"size a in theory = ", (eight*four*N)/1024/1024, " mb"
!$acc end data
deallocate(a)
Right, so, as the commenters have suggested, we're not sure exactly what you're running, but filling in the missing details by guessing, here's a shot:
Most CUDA API calls return a status code (or error code if you will); this is true both in C/C++ and in Fortran, as we can see in the Portland Group's CUDA Fortran Manual:
Most of the runtime API routines are integer functions that return an error code; they return a value of zero if the call was successful, and a nonzero value if there was an error. To interpret the error codes, refer to “Error Handling,” on page 48.
This is the case for cudaMemGetInfo() specifically:
integer function cudaMemGetInfo( free, total )
integer(kind=cuda_count_kind) :: free, total
The two integers for free and total are of kind cuda_count_kind, which, if I am not mistaken, is effectively unsigned... anyway, I would guess that what you're getting is an error code. Have a look at the Error Handling section on page 48 of the manual.
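So rather than discarding the status, check it. A minimal sketch against the interface quoted above (assuming CUDA Fortran and its cudafor module; zero is the success value the manual mentions):

use cudafor
integer :: istat
integer(kind=cuda_count_kind) :: free, total

istat = cudaMemGetInfo(free, total)
if (istat /= 0) then
    write(*,*) 'cudaMemGetInfo failed with status ', istat
else
    write(*,*) 'free bytes: ', free, ' total bytes: ', total
end if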

Why does the F# compiler get into a twist with seq{0L..-5L..-10L}?

I'm having a bit of trouble declaring a descending sequence of int64.
What I want is this:
seq{0L..-5L..-10L};;
However, I get an error:
seq{0L..-5L..-10L};;
---^^^^^^^^^^^^^^^
stdin(5,4): error FS0739: Invalid object, sequence or record expression
Interestingly, it works with plain int:
> seq{0..-5..-10};;
val it : seq<int> = seq [0; -5; -10]
Even more interestingly, if I put spaces between .., it starts working with int64 too:
> seq{0L .. -5L .. -10L};;
val it : seq<int64> = seq [0L; -5L; -10L]
Can someone explain why the compiler gets into a twist with seq{0L..-5L..-10L}?
I agree that this behavior is a bit odd. It is generally recommended (although this is not strictly required by the specification) to write spaces around .., and it works correctly in that case. So I'd recommend using:
seq { 0 .. -5 .. -10 }
seq { 0L .. -5L .. -10L }
Why does this behave differently for int and int64? You may notice that when you write 1..-2 and 1L..-2, Visual Studio colorizes the text differently (in the first case, .. has the same color as the numbers; in the second, it has the same color as a spaced ..).
The problem is that when the compiler sees 1., it may mean a floating-point value (1.0) or it may be the start of a range 1.., so this case is handled specially. For 1L. there is no such problem - 1L. has to be the beginning of 1L.., since 1L. is not a valid floating-point literal.
So, if you write 1..-5..-10, the compiler uses the special handling and generates a sequence. If you write 1L..-5L..-10L, the compiler instead parses ..- as a unary operator applied to 5L. Writing the spaces resolves the ambiguity between a unary operator and .. followed by a negative number.
For reference, my Visual Studio shows 10.. in green, but a standalone .. in yellow - not a particularly noticeable difference, but they are different :-)
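If you would rather not depend on spacing, parenthesizing the negative literals should also remove the ambiguity, since the compiler then sees .. followed by ( rather than the ..- operator:

> seq{0L..(-5L)..(-10L)};;
val it : seq<int64> = seq [0L; -5L; -10L]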
