I have a rule that I expect to be reused by a variety of modules. I figured I'd turn it into a function, have the modules pass their input to that function, and use a set-comprehension-like approach, but I'm running into the "functions must not produce multiple outputs for same inputs" error.
Here's a contrived example of what I want to accomplish. I'm thinking I'm going about this the wrong way and there's another approach to this type of problem in Rego.
Classic generator:
arr = [1,2,3,4]
result[entry] {
    itm := arr[i]
    r := itm % 2
    r == 0
    entry := { "type": "even", "val": itm }
}
result[entry] {
    itm := arr[i]
    r := itm % 2
    r == 1
    entry := { "type": "odd", "val": itm }
}
This works as expected.
"result": [
{
"type": "even",
"val": 2
},
{
"type": "even",
"val": 4
},
{
"type": "odd",
"val": 1
},
{
"type": "odd",
"val": 3
}
]
Here's the function approach, which triggers the error. I'm passing the t_label variable just to give the function some argument; it's not really important.
f(t_label) := q {
    q := [entry |
        itm := arr[i]
        r := itm % 2
        r == 0
        entry := { t_label: "even", "val": itm }
    ]
}
f(t_label) := q {
    q := [entry |
        itm := arr[i]
        r := itm % 2
        r == 1
        entry := { t_label: "odd", "val": itm }
    ]
}
Is this a thing that is done? How is this problem generally approached using Rego?
You're right — unlike rules, functions can't be partial. If you really need something like that for a function you could either have a function to aggregate the result of two (or more) other function calls:
even(t_label) := {entry |
    itm := arr[_]
    itm % 2 == 0
    entry := {t_label: "even", "val": itm}
}
odd(t_label) := {entry |
    itm := arr[_]
    itm % 2 == 1
    entry := {t_label: "odd", "val": itm}
}
f(t_label) := even(t_label) | odd(t_label)
both := f("foo")
For your particular example though, I think we'd be excused for using a little "or hack" with map-based branching:
f(t_label) := {entry |
    itm := arr[_]
    entry := {t_label: {0: "even", 1: "odd"}[itm % 2], "val": itm}
}
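Since the stated goal was to have modules pass their own input into the function, the same map-branching trick also works with the collection as an extra argument. A minimal sketch (classify is a name I made up; the expected output is reasoned from the example above):

classify(xs, t_label) := {entry |
    itm := xs[_]
    entry := {t_label: {0: "even", 1: "odd"}[itm % 2], "val": itm}
}

# classify([1, 2, 3, 4], "type") should yield the set:
# {{"type": "odd", "val": 1}, {"type": "even", "val": 2},
#  {"type": "odd", "val": 3}, {"type": "even", "val": 4}}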
I'm having problems transforming equations, again...
Setting up the functions b(a) and c(b) works. Composing them, c(b(a)), also works to get from a temperature to a current. But now I want to invert that to get a(c). The result should be something like a(c):= (c-(4`mA))*(25`degC)/(4`mA); but it's not working, even with the ''-trick.
(%i1) load(ezunits);
(%o1) "C:/maxima-5.44.0/share/maxima/5.44.0/share/ezunits/ezunits.mac"
(%i7) a0: 0`degC;
am: 100`degC;
b0: 0`mV;
bm: 4`mV;
c0: 4`mA;
cm: 20`mA;
(a0) 0 ` degC
(am) 100 ` degC
(b0) 0 ` mV
(bm) 4 ` mV
(c0) 4 ` mA
(cm) 20 ` mA
(%i8) b(a):= (bm-b0)/(am-a0)*(a-a0)+b0;
(%o8) b(a):=(bm-b0)/(am-a0)*(a-a0)+b0
(%i9) c(b):= (cm-c0)/(bm-b0)*(b-b0)+c0;
(%o9) c(b):=(cm-c0)/(bm-b0)*(b-b0)+c0
(%i10) c(b(50`degC));
(%o10) 12 ` mA
(%i11) a(c):= dimensionally(solve(c(b(T)), T));
(%o11) a(c):=dimensionally(solve(c(b(T)),T))
(%i12) a(12`mA);
(%o12) [T=(-25) ` degC]
(%i13) a(c):= ''(dimensionally(solve(c(b(T)), T)));
(%o13) a(c):=[T=(-25) ` degC]
(%i14) a(12`mA);
(%o14) [T=(-25) ` degC]
(%i15) oi: T, dimensionally(solve(c(b(T)), T));
(oi) (-25) ` degC
(%i16) a(c):= (c-(4`mA))*(25`degC)/(4`mA);
(%o16) a(c):=((c-4 ` mA)*(25 ` degC))/4 ` mA
(%i17) a(12`mA);
(%o17) 50 ` degC
It looks like you have omitted the specific value of c from solve(c(b(T)), T) -- what I mean is you need something like solve(c(b(T)) = c1, T) where c1 is the input value such as 12 ` mA.
This definition seems to work --
a(c1):= dimensionally(solve(c(b(T)) = c1, T));
Then I get
(%i22) a(12`mA);
(%o22) [T = 50 ` degC]
When you omit the ... = c1, you are effectively solving ... = 0; that's why you get T = (-25) ` degC (indeed, b(-25`degC) = -1`mV and c(-1`mV) = 0`mA).
The other variation a(c1) := ''(...) should also work, although I didn't try it.
You can write a(c) := dimensionally(solve(c(b(T)) = c, T)), i.e., using the same name for the variable c and the function c, but it's easy to get mixed up, and I am also hoping to change that behavior in the near future (with the implementation of lexical scoping of symbols), which would make that stop working.
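If you'd rather have a(c1) return the bare temperature instead of a list of equations, a small variation (a sketch; I haven't tried it with ezunits loaded) is to take the right-hand side of the first solution:

a(c1) := rhs(first(dimensionally(solve(c(b(T)) = c1, T))));
a(12`mA);    /* expected: 50 ` degC */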
I have the following functions:
P[t_] := P[t] = P[t-1] +a*ED[t-1];
ED[t_] := ED[t] = DF[t] + DC[t];
DF[t_] := DF[t] = b (F - P[t]);
DC[t_] := DC[t] = c (P[t] - F);
And the following parameters:
a=1;
c=0.2;
b = 0.75;
F=100;
In Mathematica I use the function "ListLinePlot" in order to plot P[t] and F:
ListLinePlot[{Table[P[t], {t, 0, 25}], Table[F, {t, 0, 25}]}, PlotStyle → {Black, Red}, Frame → True, FrameLabel → {"time", "price"}, AspectRatio → 0.4, PlotRange → All]
How can I do this in wxMaxima? Is there a similar function or an alternative to ListLinePlot?
This is my attempt in wxMaxima:
P[t] := P[t-1] + a * ED[t-1];
ED[t] := DF[t] + DC[t];
DF[t] := b*[F-P[t]];
DC[t] := c*[P[t]-F];
a=1;
c=0.2;
b=0.75;
F=100;
And then I tried:
draw2d(points(P[t], [t,0,25]))
The plotted function should look like this: (image of the expected plot omitted)
OK, I've adapted the code you showed above. This works for me. I'm working with Maxima 5.44 on macOS.
P[t] := P[t-1] + a * ED[t-1];
ED[t] := DF[t] + DC[t];
DF[t] := b*(F-P[t]);
DC[t] := c*(P[t]-F);
a:1;
c:0.2;
b:0.75;
F:100;
P[0]: F + 1;
Pt_list: makelist (P[t], t, 0, 25);
load (draw);
set_draw_defaults (terminal = qt);
draw2d (points_joined = true, points(Pt_list));
Notes:
(1) There needs to be a base case for the recursion on P; I put P[0]: F + 1.
(2) Assignments are : instead of =. Note that x = y is a symbolic equation, not an assignment.
(3) Square brackets [ ] are only for subscripts and lists; use parentheses ( ) for grouping expressions.
(4) The syntax for draw2d is a little different; I fixed it up. (I set a default for terminal since the built-in value is incorrect for Maxima on macOS; if you are working on Linux or Windows, you can omit that.)
EDIT: Try this to draw a horizontal line as well.
draw2d (points_joined = true, points(Pt_list),
color = red, points([[0, F], [25, F]]),
yrange = [F - 1, P[0] + 1]);
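To carry over the FrameLabel part of your Mathematica call as well, the draw package accepts axis labels. A sketch using its xlabel and ylabel options (otherwise the same call as above):

draw2d (points_joined = true, points(Pt_list),
    color = red, points([[0, F], [25, F]]),
    xlabel = "time", ylabel = "price",
    yrange = [F - 1, P[0] + 1]);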
I'm still practicing with the LLVM-C API, and I have a question about this code.
I got the original code from source. In Delphi it is:
procedure test;
(*int A[1024];
int main(){
int B[1024];
A[50] = A[49] + 5;
B[0] = B[1] + 10;
return 0;
}
*)
var
context : TLLVMContextRef ;
module : TLLVMModuleRef;
builder : TLLVMBuilderRef;
typeA,
typeB,
mainFnReturnType : TLLVMTypeRef;
arrayA,
arrayB,
mainFn,
Zero64,
temp,
temp2,
returnVal,
ptr_A_49,
ptr_B_1,
elem_A_49 ,
elem_B_1,
ptr_A_50,
ptr_B_0 : TLLVMValueRef;
entryBlock,
endBasicBlock : TLLVMBasicBlockRef;
indices : array[0..1] of TLLVMValueRef;
begin
context := LLVMGetGlobalContext;
module := LLVMModuleCreateWithNameInContext('meu_modulo.bc', context);
builder := LLVMCreateBuilderInContext(context);
//
// Declare the return type of the main function.
mainFnReturnType := LLVMInt64TypeInContext(context);
// Create the main function.
mainFn := LLVMAddFunction(module, 'main', LLVMFunctionType(mainFnReturnType, nil, 0, False));
// Declare the entry block.
entryBlock := LLVMAppendBasicBlockInContext(context, mainFn, 'entry');
// Declare the exit block.
endBasicBlock := LLVMAppendBasicBlock(mainFn, 'end');
// Position the builder at the entry block.
LLVMPositionBuilderAtEnd(builder, entryBlock);
// Create a zero value to put in the return.
Zero64 := LLVMConstInt(LLVMInt64Type(), 0, false);
// Create the return value and initialize it to zero.
returnVal := LLVMBuildAlloca(builder, LLVMInt64Type, 'retorno');
LLVMBuildStore(builder, Zero64, returnVal);
//
// Global array of 1024 elements.
typeA := LLVMArrayType(LLVMInt64Type, 1024);
arrayA := LLVMBuildArrayAlloca(builder, typeA, LLVMConstInt(LLVMInt64Type, 0, false), 'A'); //LLVMAddGlobal (module, typeA, 'A');
LLVMSetAlignment(arrayA, 16);
// Local array of 1024 elements.
typeB := LLVMArrayType(LLVMInt64Type(), 1024);
arrayB := LLVMBuildArrayAlloca(builder, typeB, LLVMConstInt(LLVMInt64Type, 0, false), 'B');
LLVMSetAlignment(arrayB, 16);
// A[50] = A[49] + 5;
// The documentation says to use two indices, the first one zero: http://releases.llvm.org/2.3/docs/GetElementPtr.html#extra_index
// The first index, i64 0 is required to step over the global variable %MyStruct. Since the first argument to the GEP instruction must always be a value of pointer type, the first index steps through that pointer. A value of 0 means 0 elements offset from that pointer.
indices[0] := LLVMConstInt(LLVMInt32Type, 0, false);
indices[1] := LLVMConstInt(LLVMInt32Type, 49, false);
ptr_A_49 := LLVMBuildInBoundsGEP(builder, arrayA, @indices[0], 2, 'ptr_A_49"');
TFile.WriteAllText('Func.II',LLVMDumpValueToStr(mainFn));
elem_A_49 := LLVMBuildLoad(builder, ptr_A_49, 'elem_of_A');
temp := LLVMBuildAdd(builder, elem_A_49, LLVMConstInt(LLVMInt64Type(), 5, false), 'temp');
indices[0] := LLVMConstInt(LLVMInt32Type(), 0, false);
indices[1] := LLVMConstInt(LLVMInt32Type(), 50, false);
ptr_A_50 := LLVMBuildInBoundsGEP(builder, arrayA, @indices[0], 2, 'ptr_A_50');
LLVMBuildStore(builder, temp, ptr_A_50);
//
// B[0] = B[1] + 10;
indices[0] := LLVMConstInt(LLVMInt32Type, 0, false);
indices[1] := LLVMConstInt(LLVMInt32Type, 1, false);
ptr_B_1 := LLVMBuildInBoundsGEP(builder, arrayB, @indices[0], 2, 'ptr_B_1');
elem_B_1:= LLVMBuildLoad(builder, ptr_B_1, 'elem_of_B');
temp2 := LLVMBuildAdd(builder, elem_B_1, LLVMConstInt(LLVMInt64Type(), 10, false), 'temp2');
indices[0] := LLVMConstInt(LLVMInt32Type, 0, false);
indices[1] := LLVMConstInt(LLVMInt32Type, 0, false);
ptr_B_0 := LLVMBuildInBoundsGEP(builder, arrayB, @indices[0], 2, 'ptr_B_0');
LLVMBuildStore(builder, temp2, ptr_B_0);
//
// Create a jump to the exit block.
LLVMBuildBr(builder, endBasicBlock);
// Position the builder at the exit block.
LLVMPositionBuilderAtEnd(builder, endBasicBlock);
// Create the return.
LLVMBuildRet(builder, LLVMBuildLoad(builder, returnVal, ''));
// Print the module's code.
//LLVMDumpModule(module);
TFile.WriteAllText('Func.II',LLVMDumpValueToStr(mainFn));
// Write to a file in bitcode format.
if (LLVMWriteBitcodeToFile(module, 'meu_modulo.bc').ResultCode <> 0) then
raise Exception.Create('error writing bitcode to file, skipping');
end;
The problem is here. If arrayA is a global variable:
// Global array of 1024 elements.
typeA := LLVMArrayType(LLVMInt64Type, 1024);
arrayA := LLVMAddGlobal(module, typeA, 'A');
LLVMSetAlignment(arrayA, 16);
....
....
ptr_A_49 := LLVMBuildInBoundsGEP(builder, arrayA, @indices[0], 2, 'ptr_A_49"');
TFile.WriteAllText('Func.II', LLVMDumpValueToStr(mainFn));
then the GEP instruction is not transferred to the code; in fact, the output is:
define i64 @main() {
entry:
  %retorno = alloca i64
  store i64 0, i64* %retorno
  %B = alloca [1024 x i64], i64 0, align 16

end:                                          ; No predecessors!
}
If arrayA is a local variable:
// Local array of 1024 elements.
typeA := LLVMArrayType(LLVMInt64Type, 1024);
arrayA := LLVMBuildArrayAlloca(builder, typeA, LLVMConstInt(LLVMInt64Type, 0, false), 'A'); // LLVMAddGlobal(module, typeA, 'A');
LLVMSetAlignment(arrayA, 16);
.....
....
ptr_A_49 := LLVMBuildInBoundsGEP(builder, arrayA, @indices[0], 2, 'ptr_A_49"');
TFile.WriteAllText('Func.II', LLVMDumpValueToStr(mainFn));
then the GEP instruction is transferred to the code; in fact, the output is:
define i64 @main() {
entry:
  %retorno = alloca i64
  store i64 0, i64* %retorno
  %A = alloca [1024 x i64], i64 0, align 16
  %B = alloca [1024 x i64], i64 0, align 16
  %"ptr_A_49\22" = getelementptr inbounds [1024 x i64], [1024 x i64]* %A, i32 0, i32 49

end:                                          ; No predecessors!
}
Why?
Reply from the llvm-dev mailing list:
With 'A' being a global variable, the GEP becomes a ConstantExpr
(GetElementPtrConstantExpr instead of GetElementPtrInst) since all of
its arguments are constant. ConstantExprs are "free-floating", i.e. not
in a BasicBlock's instruction list, and therefore only appear in the
printout when used.
Michael
Another user:
LLVM has roughly[1] two kinds of Value: Constants and Instructions.
Constants are things like literal constants, (addresses of) global
variables, and various expressions based just on those things; they're
designed to be values that can be directly calculated by the compiler
and/or linker without any CPU instructions actually being executed[2].
Instructions on the other hand sit inside Functions as real entities,
they produce %whatever Values and, unless optimized away, will be
turned into real CPU instructions in the end.
So, you were asking for "GEP something, 0, 49". If that "something" is
a Constant (e.g. a GlobalVariable) then that GEP only depends on
Constants so it can be a ConstantExpr too, written
"getelementptr([1024 x i64], [1024 x i64]* #var, i32 0, i32 49)". That
Constant is then not inserted into a block (it's not an instruction so
it can't be). Instead it's written directly in any instruction that
uses it, so if you actually use the GEP you might see something like:
%val = load i64, i64* getelementptr([1024 x i64], [1024 x i64]* @var, i32 0, i32 49)
Until you use it, it's not actually in the function anywhere, though.
You just have a handle you can use when needed.
On the other hand if the "something" is a local variable, then the GEP
needs to be an actual instruction inside a function and the API you're
using will insert it automatically.
In the Constant case, you can manually create an instruction anyway,
at least in C++. I'm afraid I haven't used the C API and couldn't see
an obvious way there, but you probably don't want to since
optimization would quickly undo it and turn it back into a Constant.
Cheers.
Tim.
[1] There are also Arguments, representing function parameters. They
behave like Instructions for these purposes.
[2] But you can build pathological Constants that no linker really
could calculate, like 4 * @global. That tends to result in a compiler
error.
I have recently been running some numerical codes written in Go on large datasets and have been encountering memory management issues. While attempting to profile the problem, I have measured the memory usage of my program in three different ways: with Go's runtime/pprof package, with the unix time utility, and by manually adding up the size of the data that I allocated. These three methods do not give me consistent results.
Below is a simplified version of the code that I am profiling. It allocates several slices, puts values at every index and places each of them inside of a parent slice:
package main
import (
"fmt"
"os"
"runtime/pprof"
"unsafe"
"flag"
)
var mprof = flag.String("mprof", "", "write memory profile to this file")
func main() {
flag.Parse()
N := 1<<15
psSlice := make([][]int64, N)
_ = psSlice
size := 0
for i := 0; i < N; i++ {
ps := make([]int64, 1<<10)
for i := range ps { ps[i] = int64(i) }
psSlice[i] = ps
size += int(unsafe.Sizeof(ps[0])) * len(ps)
}
if *mprof != "" {
f, err := os.Create(*mprof)
if err != nil { panic(err) }
pprof.WriteHeapProfile(f)
f.Close()
}
fmt.Printf("total allocated: %d MB\n", size >> 20)
}
Running this with the command $ time time -f "%M kB" ./mem_test -mprof=out.mprof results in the output:
total allocated: 256 MB
1141216 kB
real 0m0.150s
user 0m0.031s
sys 0m0.113s
Here the first number, 256 MB, is just the size of the slices computed from unsafe.Sizeof, and the second number, 1141216 kB (roughly 1114 MB), is what time reports. Running the pprof tool results in
(pprof) top1
Total: 108.2 MB
107.8 99.5% 99.5% 107.8 99.5% main.main
These results scale smoothly in the way you would expect them to for slices of smaller or larger lengths.
Why don't these three numbers line up more closely?
First, you need to provide an error-free example. Let's start with the basic numbers. For example,
package main
import (
"fmt"
"runtime"
"unsafe"
)
func WriteMatrix(nm [][]int64) {
for n := range nm {
for m := range nm[n] {
nm[n][m]++
}
}
}
func NewMatrix(n, m int) [][]int64 {
a := make([]int64, n*m)
nm := make([][]int64, n)
lo, hi := 0, m
for i := range nm {
nm[i] = a[lo:hi:hi] // three-index slice: cap is hi-lo, so rows can't grow into each other
lo, hi = hi, hi+m
}
return nm
}
func MatrixSize(nm [][]int64) int64 {
size := int64(0)
for i := range nm {
size += int64(unsafe.Sizeof(nm[i]))
for j := range nm[i] {
size += int64(unsafe.Sizeof(nm[i][j]))
}
}
return size
}
var nm [][]int64
func main() {
n, m := 1<<15, 1<<10
var ms1, ms2 runtime.MemStats
runtime.ReadMemStats(&ms1)
nm = NewMatrix(n, m)
WriteMatrix(nm)
runtime.ReadMemStats(&ms2)
fmt.Println(runtime.GOARCH, runtime.GOOS)
fmt.Println("Actual: ", ms2.TotalAlloc-ms1.TotalAlloc)
fmt.Println("Estimate:", n*3*8+n*m*8)
fmt.Println("Total: ", ms2.TotalAlloc)
fmt.Println("Size: ", MatrixSize(nm))
// check top VIRT and RES for COMMAND peter
for {
WriteMatrix(nm)
}
}
Output:
$ go build peter.go && /usr/bin/time -f "%M KiB" ./peter
amd64 linux
Actual: 269221888
Estimate: 269221888
Total: 269240592
Size: 269221888
^C
Command exited with non-zero status 2
265220 KiB
$
$ top
VIRT 284268 RES 265136 COMMAND peter
Is this what you expected?
See MatrixSize for the correct way to calculate the memory size.
The infinite loop at the end keeps the process alive so we can use the top command; it pins the matrix resident by continually updating it.
What results do you get when you run this program?
BUG:
Your result from /usr/bin/time is 1056992 KiB, which is too large by a factor of four. It's a bug in your version of /usr/bin/time: on Linux, ru_maxrss is reported by the kernel in kilobytes, not pages, but time converts it as if it were pages. My version of Ubuntu has been patched.
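As a cross-check that doesn't go through /usr/bin/time at all, you can read ru_maxrss directly from Go. A minimal Linux-specific sketch using syscall.Getrusage (on Linux the kernel reports Maxrss in kilobytes, so no page-size conversion is needed):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Allocate roughly 256 MB so there is something to measure.
	b := make([]int64, 1<<25)
	for i := range b {
		b[i] = int64(i)
	}
	var ru syscall.Rusage
	if err := syscall.Getrusage(syscall.RUSAGE_SELF, &ru); err != nil {
		panic(err)
	}
	fmt.Printf("ru_maxrss: %d KiB\n", ru.Maxrss)
}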
References:
Re: GNU time: incorrect results
time-1.7 counts rusage wrong on Linux
GNU Project Archives: time
“time” 1.7-24 source package in Ubuntu. ru_maxrss is reported in KBytes not pages. (Closes: #649402)
#649402 - [PATCH] time overestimates max RSS by a factor of 4 - Debian Bug report logs
Subject: Fix ru_maxrss reporting Author: Richard Kettlewell
Bug-Debian: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=649402
--- time-1.7.orig/time.c
+++ time-1.7/time.c
@@ -392,7 +398,7 @@
ptok ((UL) resp->ru.ru_ixrss) / MSEC_TO_TICKS (v));
break;
case 'M': /* Maximum resident set size. */
- fprintf (fp, "%lu", ptok ((UL) resp->ru.ru_maxrss));
+ fprintf (fp, "%lu", (UL) resp->ru.ru_maxrss);
break;
case 'O': /* Outputs. */
fprintf (fp, "%ld", resp->ru.ru_oublock);