I've been trying to understand Julia from a meta-programming viewpoint, and I often find myself wishing to generate the user-facing Julia syntax from an Expr.
Searching through the source code on GitHub, I came across the "deparse" function defined in femtolisp, but it doesn't seem to be exposed at all.
What are the ways I can generate a proper Julia expression using just the internal representation?
P.S. There ought to be some sort of prettifying tool for the generated Julia code; do you know of any such (un/registered) package?
UPDATE
I've stored all the Meta.show_sexpr output of a Julia source file into a different file.
# This function is identical to create_adder implementation above.
function create_adder(x)
    y -> x + y
end

# You can also name the internal function, if you want
function create_adder(x)
    function adder(y)
        x + y
    end
    adder
end
add_10 = create_adder(10)
add_10(3) # => 13
is converted to
(:line, 473, :none),
(:function, (:call, :create_adder, :x), (:block,
    (:line, 474, :none),
    (:function, (:call, :adder, :y), (:block,
        (:line, 475, :none),
        (:call, :+, :x, :y)
      )),
    (:line, 477, :none),
    :adder
  )),
(:line, 480, :none),
(:(=), :add_10, (:call, :create_adder, 10)),
(:line, 481, :none),
(:call, :add_10, 3))
Now, I wish to evaluate these in Julia.
Here's an example of a function that takes an "s_expression" in tuple form, and generates the corresponding Expr object:
"""rxpe_esrap: parse expr in reverse :p """
function rpxe_esrap(S_expr::Tuple)
    return Expr( Tuple( isa(i, Tuple) ? rpxe_esrap(i) : i for i in S_expr )... );
end
Demo
Let's generate a nice s_expression tuple to test our function.
(Unfortunately Meta.show_sexpr doesn't generate a string; it just prints to an IO stream. To get its output as a string that we can parse / eval, we either need to read it from a file or print straight into something like an IOBuffer.)
B = IOBuffer(); # will use to 'capture' the s_expr in
Expr1 = :(1 + 2 * 3); # the expr we want to generate an s_expr for
Meta.show_sexpr(B, Expr1); # push s_expr into buffer B
seek(B, 0); # 'rewind' buffer
SExprStr = read(B, String); # get buffer contents as string
close(B); # please to be closink after finished, da?
SExpr = parse(SExprStr) |> eval; # final s_expr in tuple form
resulting in the following s_expression:
julia> SExpr
(:call, :+, 1, (:call, :*, 2, 3))
Now let's test our function:
julia> rpxe_esrap(SExpr)
:(1 + 2 * 3) # Success!
Notes:
1. This is just a bare-bones function to demonstrate the concept; obviously it would need appropriate sanity checks if it were to be used in serious projects.
2. This implementation just takes a single "s_expr tuple" argument; your example shows a string that corresponds to a sequence of tuples, but presumably you could tokenise such a string first to obtain the individual tuple arguments, and run the function on each one separately.
3. The usual warnings regarding parse / eval and scope apply. Also, if you wanted to pass the s_expr string itself as the function argument, rather than an "s_expr tuple", then you could modify this function to move the parse / eval step inside the function. This may be a better choice, since you can check what the string contains before evaluating potentially dangerous code, etc etc.
4. I'm not saying there isn't an official function that does this. Though if there is one, I'm not aware of it. This was fun to write though.
Related
I am trying to write a function that produces a string of a fixed length. There are two cases I think I need to consider:
The string is too long and must be cropped
The string is too short and must be padded with whitespace
To do this, I have written the following:
foo = "testtesttesttest"
bar = "test"
function fixed_width_a(s)
    return string.format("%-6s", string.sub(s, 1, 6))
end
print(fixed_width_a(foo))
print(fixed_width_a(bar))
-- testte
-- test__ (Using underscores to denote spaces)
While I don't know if this is the best way, it works. Great!
Now, I'd like to be able to specify the width of the string as a parameter. For example,
function fixed_width_b(s, w)
    w = w or 6
    return string.format("%-ws", string.sub(s, 1, w))
end
Of course, this naive attempt doesn't work because "%-ws" isn't parsed correctly. Another attempt might be to specify,
string.format("%-{%d}s", w, string.sub(s, 1, w))
but obviously this makes no sense either.
Question: how do I specify formatting options using a variable in string.format?
print(("%-_._s"):gsub("_", 6):format("teststring"))
print(("%-"+6+"."+"s"):format("teststring"))
print(("%%-%d.%ds"):format(6,6):format("teststring"))
Okay, so this seems to work okay, although I don't know if it's the best option.
function fixed_width_b(s, w)
    w = w or 6
    local tmp = string.format("%%-%ds", w)
    return string.format(tmp, string.sub(s, 1, w))
end
Here, I construct the first argument to the crop/pad part of the code as a string called tmp. If w = 5, for example, then tmp = "%-5s".
Edit
For brevity, I suppose I could just write,
function fixed_width_b(s, w)
    w = w or 6
    return string.format(string.format("%%-%ds", w), string.sub(s, 1, w))
end
and avoid an intermediate variable.
For a large model, the function model() used through the Z3 Python API truncates the output (at some point the model is continued with "...").
Is there a way to avoid this?
If I remember correctly, this is a Python "feature". Instead of str(...) try using repr(...), which should produce a string that can be read back (if needed) by the interpreter. You can of course iterate over the model elements separately, so as to make all the strings needing to be output smaller. For instance, along these lines:
s = Solver()
# Add constraints...
print(s.check())
m = s.model()
for k in m:
    print('%s=%s' % (k, m[k]))
I have the function below, which tries to save into fileName the answer given by str(self.solver.check()) and the model given by str(self.solver.model()):
def createSMT2LIBFileSolution(self, fileName):
    with open(fileName, 'w+') as foo:
        foo.write(str(self.solver.check()))
        foo.write(str(self.solver.model()))
        foo.close()
The output of the problem is:
sat[C5_VM1 = 0,
... //these "..." are added by me
VM6Type = 6,
ProcProv11 = 18,
VM2Type = 5,
...]
The "...]" appears at line 130 in all of the files that are truncated. I don't know if this is a Python or a Z3 thing. If the model can be written in fewer than 130 lines, everything is fine.
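Building on the answer above, here is a minimal sketch (the create_solution_file name and the example constraints are placeholders, not code from the question) of writing the file by iterating over the model, so that no single str() call produces output long enough to be truncated:
from z3 import Solver, Ints

def create_solution_file(solver, file_name):
    # Write the check() result, then one "name = value" line per model entry.
    with open(file_name, 'w') as out:
        out.write(str(solver.check()) + "\n")
        model = solver.model()
        for decl in model:
            out.write("%s = %s\n" % (decl.name(), model[decl]))

# Usage sketch
x, y = Ints('x y')
s = Solver()
s.add(x + y == 10, x > 3)
create_solution_file(s, 'solution.txt')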
As far as I know, since Z3 doesn't recognize transcendental functions, it throws an error during the conversion when using the following code.
from z3 import *  # for Ast, Solver, BoolVal, And, Z3_benchmark_to_smtlib_string, Reals

def convertor(f, status="unknown", name="benchmark", logic=""):
    v = (Ast * 0)()
    if isinstance(f, Solver):
        a = f.assertions()
        if len(a) == 0:
            f = BoolVal(True)
        else:
            f = And(*a)
    return Z3_benchmark_to_smtlib_string(f.ctx_ref(), name, logic, status, "", 0, v, f.as_ast())
pi, EI, kA, kB, N = Reals('pi EI kA kB N')
s = Solver()
s.add(pi == 3.1415926525)
s.add(EI == 175.2481)
s.add(kA >= 0)
s.add(kA <= 100)
s.add(kB >= 0)
s.add(kB <= 100)
s.add(N >= 100)
s.add(N <= 200)
Please change the path of the input file "beamfinv3.bch", which can be found at: link.
continue_read = False
input_file = open('/home/mani/downloads/new_z3/beamfinv3.bch', 'r')
for line in input_file:
    if line.strip() == "Constraints":
        continue_read = True
        continue
    if line.strip() == "end":
        continue_read = False
    if continue_read == True:
        parts = line.split(';')
        if parts[0] != "end":
            # print parts[0]
            s.add(eval(parts[0]))
input_file.close()
file = open('cyber.smt2', 'w')
result = convertor(s, logic="None")
file.write(result)
error:
File "<string>", line 1, in <module>
NameError: name 'sin' is not defined
Any way out? or help?
Thanks.
The core of this problem is that eval tries to execute a Python script, i.e., all functions that occur within parts[0] must have a corresponding Python function of the same name, which is not the case for the trigonometric functions (they are in neither the Python API nor the C API, the former being based on the latter). For now you could try to add those functions yourself, perhaps with an implementation based on parse_smt2_string, or perhaps by replacing the Python strings with SMT2 strings altogether.
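For example, here is a minimal sketch of the first suggestion, declaring sin as an uninterpreted Z3 function so that the name resolves inside eval; note that Z3 then knows nothing about the trigonometric meaning of sin, so this only shows that the eval step itself no longer fails:
from z3 import Function, RealSort, Real, Solver

# Uninterpreted stand-in so that eval("sin(x) <= 1") can resolve the name 'sin'.
sin = Function('sin', RealSort(), RealSort())
x = Real('x')

s = Solver()
s.add(eval("sin(x) <= 1"))  # no NameError anymore
print(s.check())            # expected: sat, but only because sin is uninterpreted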
Z3 can represent expressions containing trigonometric functions, but it will refuse to do so when the logic is set to something specific; see arith_decl_plugin. I don't know the Python API well enough, but the logic argument might have to be None instead of "".
While Z3 can represent these expressions, it's probably not very good at solving them. See comments on the limitations in Can Z3 handle sinusoidal and exponential functions, Z3 supports for nonlinear arithmetics, and Z3 Performance with Non-Linear Arithmetic.
Note: This is a sequel to my previous question about powersets.
I have got a nice Ruby solution to my previous question about generating a powerset of a set without a need to keep a stack:
class Array
  def powerset
    return to_enum(:powerset) unless block_given?
    1.upto(self.size) do |n|
      self.combination(n).each{|i| yield i}
    end
  end
end

# demo
['a', 'b', 'c'].powerset{|item| p item} # items are generated one at a time
ps = [1, 2, 3, 4].powerset # no block, so you'll get an enumerator
10.times.map{ ps.next } # 10.times without a block is also an enumerator
It does the job and works nicely.
However, I would like to try to rewrite the same solution in Erlang, because for the {|item| p item} block I already have a big portion of working code written in Erlang (it does some stuff with each generated subset).
Although I have some experience with Erlang (I have read all 2 books :)), I am pretty confused by the example and the comments that sepp2k kindly gave me on my previous question about powersets. I do not understand the last line of the example; the only thing that I know is that it is a list comprehension.
How can I rewrite this Ruby iterative powerset generation in Erlang? Or does the provided Erlang example perhaps already almost suit my needs?
All the given examples have O(2^N) memory complexity, because they return the whole result (the first example). The last two examples use regular recursion, so the stack grows. The code below, which is a modification and combination of those examples, will do what you want:
subsets(Lst) ->
    N = length(Lst),
    Max = trunc(math:pow(2, N)),
    subsets(Lst, 0, N, Max).

subsets(Lst, I, N, Max) when I < Max ->
    _Subset = [lists:nth(Pos+1, Lst) || Pos <- lists:seq(0, N-1), I band (1 bsl Pos) =/= 0],
    % perform some actions on particular subset
    subsets(Lst, I+1, N, Max);
subsets(_, _, _, _) ->
    done.
In the above snippet, tail recursion is used, which is optimized by the Erlang compiler and converted into simple iteration under the covers. Recursion can be optimized this way only when the recursive call is the last one in the function's execution flow. Note also that each generated Subset can be used in place of the comment and is forgotten (garbage collected) right after that. Thanks to this, neither the stack nor the heap grows, but you also have to perform the operation on the subsets one after another, as there is no final result containing all of them.
My code uses the same names for analogous variables as the examples do, to make them easier to compare. I'm sure it could be refined for performance, but I only want to show the idea.
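For comparison, here is the same bit-mask idea written as a Python generator; this is only an illustration of the technique, not part of the Erlang solution:
def subsets(lst):
    # The i-th subset contains lst[pos] exactly when bit `pos` of i is set,
    # so subsets are produced one at a time and can be discarded immediately.
    n = len(lst)
    for i in range(2 ** n):
        yield [lst[pos] for pos in range(n) if i & (1 << pos)]

for subset in subsets(['a', 'b', 'c']):
    print(subset)  # perform some action on each subset here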
In the interpreter you can just write the name of an object, e.g. a list a = [1, 2, 3, u"hellö"], at the interpreter prompt, like this:
>>> a
[1, 2, 3, u'hell\xf6']
or you can do:
>>> print a
[1, 2, 3, u'hell\xf6']
which seems equivalent for lists. At the moment I am working with HDF5 to manage some data, and I realized that there is a difference between the two methods mentioned above. Given:
with tables.openFile("tutorial.h5", mode="w", title="Some Title") as h5file:
    group = h5file.createGroup("/", 'node', 'Node information')
    tables.table = h5file.createTable(group, 'readout', Node, "Readout example")
The output of
print h5file
differs from
>>> h5file
So I was wondering if someone could explain Python's behavioral differences in these two cases?
Typing an object into the terminal calls __repr__(), which gives a detailed, unambiguous representation of the object. When you print something, you are calling __str__() and therefore asking for something human-readable.
Alex Martelli gave a great explanation here. Other responses in the thread might also illuminate the difference.
For example, take a look at datetime objects.
>>> import datetime
>>> now = datetime.datetime.now()
Compare...
>>> now
Out: datetime.datetime(2011, 8, 18, 15, 10, 29, 827606)
to...
>>> print now
Out: 2011-08-18 15:10:29.827606
Hopefully that makes it a little more clear!
The interactive interpreter will print the result of each expression typed into it. (Since statements do not evaluate, but rather execute, this printing behavior does not apply to statements such as print itself, loops, etc.)
Proof that repr() is used by the interactive interpreter as stated by Niklas Rosenstein (using a 2.6 interpreter):
>>> class Foo:
...     def __repr__(self):
...         return 'repr Foo'
...     def __str__(self):
...         return 'str Foo'
...
>>> x = Foo()
>>> x
repr Foo
>>> print x
str Foo
So while the print statement may be unnecessary in the interactive interpreter (unless you need str and not repr), the non-interactive interpreter does not do this. Placing the above code in a file and running the file will result in nothing being printed.
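For instance, a minimal sketch of what such a script would have to print explicitly to reproduce the interactive output (reusing the Foo class from above):
class Foo:
    def __repr__(self):
        return 'repr Foo'
    def __str__(self):
        return 'str Foo'

x = Foo()
# A script has no implicit echoing, so both representations must be printed explicitly:
print(repr(x))  # repr Foo  -- what a bare `x` shows at the interactive prompt
print(str(x))   # str Foo   -- what `print x` shows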
The print statement always calls the x.__str__() method, while (only in the interactive interpreter) simply entering a variable calls the object's x.__repr__() method.
>>> '\x02agh'
'\x02agh'
>>> print '\x02agh'
agh
Look at the Python documentation at: http://docs.python.org/reference/datamodel.html#object.repr
object.__repr__(self)
Called by the repr() built-in function and by string conversions
(reverse quotes) to compute the “official” string representation of an
object. If at all possible, this should look like a valid Python
expression that could be used to recreate an object with the same
value (given an appropriate environment). If this is not possible, a
string of the form <...some useful description...> should be returned.
The return value must be a string object. If a class defines
__repr__() but not __str__(), then __repr__() is also used when an
“informal” string representation of instances of that class is
required.
This is typically used for debugging, so it is important that the
representation is information-rich and unambiguous.

object.__str__(self)
Called by the str() built-in function and by the print statement
to compute the “informal” string representation of an object. This
differs from __repr__() in that it does not have to be a valid Python
expression: a more convenient or concise representation may be used
instead. The return value must be a string object.
Example:
>>> class A():
...     def __repr__(self): return "repr!"
...     def __str__(self): return "str!"
...
>>> a = A()
>>> a
repr!
>>> print(a)
str!
>>> class B():
...     def __repr__(self): return "repr!"
...
>>> class C():
...     def __str__(self): return "str!"
...
>>> b = B()
>>> b
repr!
>>> print(b)
repr!
>>> c = C()
>>> c
<__main__.C object at 0x7f7162efb590>
>>> print(c)
str!
The print function prints each argument's __str__ to the console, i.e. like print(str(obj)).
But the interactive console shows the entered expression's __repr__. And if __str__ is not defined, __repr__ is used instead.
Ideally, __repr__ should be a representation from which we could reproduce the object, so it shouldn't be identical between different classes or between objects that represent different values. __str__ (what we get from str(obj)), on the other hand, should just look nice, because we show it to the user. For example, with datetime.time:
>>> import datetime
>>> a = datetime.time(16, 42, 3)
>>> repr(a)
'datetime.time(16, 42, 3)'
>>> str(a)
'16:42:03'  # from this alone we don't know what it is; it could just be a printed string:
>>> print("'16:42:03'")
'16:42:03'
And, sorry for bad English :).
print(variable) is equivalent to print(str(variable)),
whereas
entering variable at the prompt is equivalent to print(repr(variable)).
Obviously, the __repr__ and __str__ methods of the h5file object produce different results.
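To see both representations of the h5file object from the question side by side, something along these lines should work (a sketch only; the openFile spelling follows the question's PyTables snippet and is called open_file in newer PyTables versions):
import tables

h5file = tables.openFile("tutorial.h5", mode="r")  # open_file on newer PyTables

print(repr(h5file))  # what a bare `h5file` shows at the interactive prompt
print(str(h5file))   # what `print h5file` shows
h5file.close()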