Printing to specific (runtime-determined) precision in Julia

Given a value x and an integer n (assigned at runtime), I want to print x to exactly n digits after the decimal (after rounding if needed).
print(round(x, n)) works fine for (x,n)=(3.141592, 3) but for (x,n)=(2.5,5), it prints just 2.5, not 2.50000 (5 digits after decimal point).
If I knew n ahead of time, say 5, I could do
@printf("%.5f", x)
But @printf, being a macro, needs the format string (and hence n) to be known at compile time.
Is this possible using some show magic or something else?

Using the Format.jl package:
using Format
function foo(x, n)
    f = FormatSpec(".$(n)f")
    pyfmt(f, x)
end
foo(2.5, 5)
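# expected to return the string "2.50000" (assuming pyfmt follows Python's ".5f" format-spec behaviour)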

Unfortunately, for some reason the Julia version of @printf / @sprintf does not support the dynamic width/precision sub-specifier from the C printf standard (see man 3 printf).
If you're feeling brave, you can rely on the C sprintf, which supports the dynamic * modifier, to build a string that you then print as normal.
A = Vector{UInt8}(undef, 100); # initialise a buffer of 100 "chars"
ccall( :sprintf, Cint, (Ptr{UInt8}, Cstring, Cint, Cdouble), A, "%.*f", 4, 0.1 )
print( unsafe_string(pointer(A)) ) #> 0.1000
Note the asterisk in %.*f, and the extra argument 4 serving as the dynamic precision.
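On more recent Julia versions there is also a standard-library route: Printf (Julia 1.6 and later) exposes Printf.Format and Printf.format, which let you build the format spec at runtime. A minimal sketch, assuming that version; print_fixed is just an illustrative helper name:
using Printf

function print_fixed(x, n)
    # build the spec string at runtime, parse it once, then render x with it
    fmt = Printf.Format("%.$(n)f")
    println(Printf.format(fmt, x))
end

print_fixed(2.5, 5)   # expected output: 2.50000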


2^65 modulo 101 incorrect answer

This code checks whether the value a maps the exponents 1 to 100 to unique values using the formula (a^x) % 101:
local function f(a)
    found = {}
    bijective = true
    for x = 1, 100 do
        value = (a^x) % 101
        if found[value] then
            bijective = false
            break
        else
            found[value] = x
        end
    end
    return bijective
end
However, it does not produce the expected result.
It maps 2^65 % 101 to 56, which matches the value produced by 2^12 % 101, so I get a false result. However, the correct value for 2^65 % 101 is 57, and 2 should actually produce all unique values, giving a true result.
The error described above is specifically on Lua 5.1, is this just a quirk of Lua's number typing? Is there a way to make this function work correctly in 5.1?
First of all, this is not an issue with Lua's number typing: 2^65, being a (rather small) power of two, can be represented exactly in double precision, since doubles use an exponent-mantissa representation. The mantissa can simply be set to all zeroes (the leading one is implicit) and the exponent set to 65 (plus the offset).
I tried this on different Lua versions and PUC Lua 5.1 & 5.2 as well as LuaJIT have the issue; Lua 5.3 (and presumably later versions as well) are fine. Interestingly, using math.fmod(2^65, 101) returns the correct result on the older Lua versions but 2^65 % 101 does not (it returns 0 instead).
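A quick way to see the difference (a sketch; the outputs are the ones reported above for PUC Lua 5.1):
print(2^65 % 101)            --> 0  (wrong on Lua 5.1/5.2 and LuaJIT)
print(math.fmod(2^65, 101))  --> 57 (correct)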
This surprised me, so I dug into the Lua 5.1 sources. This is the implementation of math.fmod:
#include <math.h>
...
static int math_fmod (lua_State *L) {
  lua_pushnumber(L, fmod(luaL_checknumber(L, 1), luaL_checknumber(L, 2)));
  return 1;
}
This also is the only place where fmod from math.h appears to be used. The % operator, on the other hand, is implemented as documented in the reference manual:
#define luai_nummod(a,b) ((a) - floor((a)/(b))*(b))
in src/luaconf.h. You could trivially redefine it as fmod(a,b) to fix your issue. In fact Lua 5.4 does something similar and even provides an elaborate explanation in its sources!
/*
** modulo: defined as 'a - floor(a/b)*b'; the direct computation
** using this definition has several problems with rounding errors,
** so it is better to use 'fmod'. 'fmod' gives the result of
** 'a - trunc(a/b)*b', and therefore must be corrected when
** 'trunc(a/b) ~= floor(a/b)'. That happens when the division has a
** non-integer negative result: non-integer result is equivalent to
** a non-zero remainder 'm'; negative result is equivalent to 'a' and
** 'b' with different signs, or 'm' and 'b' with different signs
** (as the result 'm' of 'fmod' has the same sign of 'a').
*/
#if !defined(luai_nummod)
#define luai_nummod(L,a,b,m) \
{ (void)L; (m) = l_mathop(fmod)(a,b); \
if (((m) > 0) ? (b) < 0 : ((m) < 0 && (b) > 0)) (m) += (b); }
#endif
Is there a way to make this function work correctly in 5.1?
Yes. The easy way is to use fmod. This may work for these particular numbers, since they still fit exactly in doubles (the base is 2 and the exponent is moderately small), but it won't work in the general case. The better approach is to leverage modular arithmetic to keep the intermediate results small, never storing numbers significantly larger than 101^2, since (a * b) % c == ((a % c) * (b % c)) % c.
local function f(a)
    local found = {}
    local bijective = true
    local value = 1
    for x = 1, 100 do
        value = (value * a) % 101 -- a^x % 101
        if found[value] then
            bijective = false
            break
        else
            found[value] = x
        end
    end
    return bijective
end
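A quick sanity check of the rewritten function (a sketch; 2 is expected to be bijective as stated in the question, while 4 = 2^2 only reaches half of the residues):
print(f(2))  --> true
print(f(4))  --> false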

Need advice on how to print a matrix in Lisp

I have a matrix a defined so that if I do this
(format t "~a" (get-real-2d a 0 0))
it prints out the element in the first row, first column;
and if I do this
(format t "~a" (get-real-2d a 0 1))
it prints out the element in the first row, second column;
and if I do this
(format t "~a" (get-real-2d a 1 0))
it prints out the element in the second row, first column.
The matrix a looks like this
a =
((0 1 2)
(3 4 5)
(6 7 8))
and I was hoping you can show me exactly how to write a dotimes loop or other loop
that, in as few lines as possible, would print out the matrix using the get-real-2d function so the output looks like this:
0 1 2
3 4 5
6 7 8
I'm just hoping you can show me a slick, small loop that I can use to print matrices in my Lisp library, something professional looking, like one that uses only variables. Something like:
(format t "~a" (get-real-2d a i j))
instead of a bunch of:
(format t "~a" (get-real-2d a 0 0))
(format t "~a" (get-real-2d a 0 1))
(format t "~a" (get-real-2d a 0 2))
;;;;LATEST EDIT;;;
to make this simple I call
(defparameter a (create-mat 3 3 +32fc1+))
to create a 3x3 matrix - create-mat is a wrapper for opencv's cvCreateMat
the output from that command at repl is
(defparameter a (create-mat 3 3 +32fc1+))
A
CL-OPENCV> a
#.(SB-SYS:INT-SAP #X7FFFD8000E00)
i.e., the variable a is a pointer to the 3x3 matrix
then I run
(defparameter data (cffi:foreign-alloc :float :initial-contents
'(0.0f0 1.0f0 2.0f0 3.0f0 4.0f0 5.0f0 6.0f0 7.0f0 8.0f0)))
to create the data for the matrix, which I will next assign to the matrix
the output from that command at repl is
CL-OPENCV> (defparameter data (cffi:foreign-alloc :float :initial-contents
'(0.0f0 1.0f0 2.0f0 3.0f0 4.0f0 5.0f0 6.0f0 7.0f0 8.0f0)))
DATA
CL-OPENCV> data
#.(SB-SYS:INT-SAP #X7FFFD8000E40)
i.e., the variable data is a pointer to the data I'll add to the matrix
then I call..
(set-data a data 12) to add the data to the matrix - set-data is a wrapper for opencv's cvSetData
so now when I run - (get-real-2d is a wrapper for opencv's cvGetReal2d)
(get-real-2d a 0 0) it gets the element of matrix a at row 0 col 0 which is 0.0d0
the output from that command at repl is
CL-OPENCV> (get-real-2d a 0 0)
0.0d0
and now when I run
(get-real-2d a 0 1) it gets the element of matrix a at row 0 col 1, which is 1.0d0
the output from that command at repl is
CL-OPENCV> (get-real-2d a 0 1)
1.0d0
and when I run this loop
(dotimes (i 3)
  (dotimes (j 3)
    (format t "~a~%" (get-real-2d a i j))))
the output from that command at repl is
CL-OPENCV> (dotimes (i 3)
(dotimes (j 3)
(format t "~a~%" (get-real-2d a i j))))
0.0d0
1.0d0
2.0d0
3.0d0
4.0d0
5.0d0
6.0d0
7.0d0
8.0d0
NIL
but when I try your method, @Svante:
(dotimes (i 3)
  (dotimes (j 3)
    (format t "~{~{~a~^ ~}~%~}" (get-real-2d a i j))))
I get an error:
The value 0.0d0 is not of type LIST.
[Condition of type TYPE-ERROR]
because the output of one call to get-real-2d is just a single float, i.e.
CL-OPENCV> (get-real-2d a 0 0)
0.0d0
With that info, can you help me print the matrix so it looks like this:
0.0d0 1.0d0 2.0d0
3.0d0 4.0d0 5.0d0
6.0d0 7.0d0 8.0d0
You can do that directly in the format directive. The format instructions ~{ and ~} descend into a list structure.
(format t "~{~{~a~^ ~}~%~}" matrix)
The outer pair of ~{ ~} loops over the first level of the matrix, so that the directives inside get to see one row at a time. The inner pair of ~{ ~} loops over each such row, so that the directives inside get to see one element at a time. ~A prints that element. The part between ~^ and ~} gets printed only between executions of the loop body, not at the end. ~% emits a #\Newline.
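For example, with the nested-list matrix from the question, a quick REPL check (the prompt name is illustrative) looks like this:
CL-USER> (format t "~{~{~a~^ ~}~%~}" '((0 1 2) (3 4 5) (6 7 8)))
0 1 2
3 4 5
6 7 8
NIL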
EDIT as requested
Note that the ~{ ~} replace the looping, and that I named the variable matrix, not element. You need to put the entire matrix there, and it is supposed to be in the form of a nested list. I deduced this from your statement that a is ((0 1 2) (3 4 5) (6 7 8)). So, (format t "~{~{~a~^ ~}~%~}" a).
If the matrix happens not to be in the form of a nested list but rather some kind of array, you really need to loop over the indices. Nested dotimes forms should be sufficient at first:
(fresh-line)
(dotimes (i (array-dimension array 0))
  (dotimes (j (array-dimension array 1))
    (format t "~a " (aref array i j)))
  (terpri))
I don't know how your matrices map to arrays, so you will have to replace array-dimension and aref with your versions.
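If you want to try the array version standalone, here is a small sketch using a plain Common Lisp array; make-array and aref stand in for your OpenCV wrappers:
(defparameter *m*
  (make-array '(3 3) :initial-contents '((0 1 2) (3 4 5) (6 7 8))))

(fresh-line)
(dotimes (i (array-dimension *m* 0))
  (dotimes (j (array-dimension *m* 1))
    (format t "~a " (aref *m* i j)))
  (terpri))
;; prints (each row with a trailing space):
;; 0 1 2
;; 3 4 5
;; 6 7 8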
Your question can be understood in two ways, and that is why it has two solutions:
1. Define a method for printing objects of type matrix (in this case it may use knowledge about the internal structure of the matrix):
(defmethod print-object ((matrix matrix) stream)
  (format stream "~{~{~a~^ ~}~%~}" matrix))
Using format as is shown in the answers.
2. Define a client function that can use the only method of your object, get-real-2d:
(defun print-matrix (matrix dimension-x dimension-y)
  (dotimes (x dimension-x)
    (dotimes (y dimension-y)
      (princ (get-real-2d matrix x y))
      (princ #\Space))
    (princ #\Newline)))
Just using dotimes.
Here are just the two dotimes loops that you were asking for. The only thing that you need to pay attention to is when to print spaces and when to print newlines.
(dotimes (i 3)
  (dotimes (j 3)
    (princ (get-real-2d a i j))
    (if (< j 2)
        (princ #\Space)
        (terpri))))
Alternatively, you might want to use the format directives for floating point printing to have the numbers always aligned in nice columns. You can choose between ~F that will never print an exponent, ~E that will always print one, and ~G that behaves according to the magnitude. Look for details here in the HyperSpec: http://www.lispworks.com/documentation/HyperSpec/Body/22_cc.htm.
Here's an example that uses ~F with a maximum field width of 5 and 1 fractional digit:
(dotimes (i 3)
  (dotimes (j 3)
    (format t "~5,1F" (get-real-2d a i j)))
  (terpri))
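With the 3x3 matrix from the question (values 0.0 through 8.0), that loop should print something like:
  0.0  1.0  2.0
  3.0  4.0  5.0
  6.0  7.0  8.0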
This isn't hard, so I'd rather leave it to you to figure out, but here are some tips to make a "slick loop" Lisp-style. I would suggest one or more instances of mapc (or mapcar), rather than dotimes. This may feel odd if you're not used to functional programming, but once you're used to it, it's easier to read than dotimes, and you don't have to keep track of the indexes, so it can avoid errors. You really should learn to use mapcar/mapc if you aren't already familiar with them. They are cool. Or if you want to be really cool :-) you could use recursion to iterate over the matrix, but I think that for this purpose iterating using mapc will be easier to read. (But you should learn the recursive way for other jobs. If you find recursion confusing--I have no reason to think you do, but some people have trouble with it--my favorite tutorial is The Little Schemer.)
You may also want to use other format directives that allow you pad numbers with spaces if they don't have enough digits. The ~% directive may be useful as well. Peter Seibel has a very nice introduction to format.

How to parse a decimal fraction into Rational in Haskell?

I've been participating in a programming contest and one of the problems' input data included a fractional number in a decimal format: 0.75 is one example.
Parsing that into Double is trivial (I can use read for that), but the loss of precision is painful. One needs to be very careful with Double comparisons (I wasn't), which seems redundant since Haskell has the Rational data type.
When trying to use that, I've discovered that to read a Rational one has to provide a string in the following format: numerator % denominator, which I, obviously, do not have.
So, the question is:
What is the easiest way to parse a decimal representation of a fraction into Rational?
The number of external dependencies should be taken into consideration too, since I can't install additional libraries into the online judge.
The function you want is Numeric.readFloat:
Numeric Data.Ratio> fst . head $ readFloat "0.75" :: Rational
3 % 4
How about the following (GHCi session):
> :m + Data.Ratio
> approxRational (read "0.1" :: Double) 0.01
1 % 10
Of course you have to pick your epsilon appropriately.
Perhaps you'd get extra points in the contest for implementing it yourself:
import Data.Ratio ( (%) )
readRational :: String -> Rational
readRational input = read intPart % 1 + read fracPart % (10 ^ length fracPart)
  where (intPart, fromDot) = span (/='.') input
        fracPart = if null fromDot then "0" else tail fromDot
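A quick GHCi check of this hand-rolled parser (note that, as written, it only handles non-negative inputs):
> readRational "0.75"
3 % 4
> readRational "3"
3 % 1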

Binary to Integer -> Erlang

I have a binary (wrapped in a list M) in which 34= will always be present; the rest may vary in the number of digits but will always be an integer.
M = [<<"34=21">>]
When I run this command I get an answer like
hd([X || <<"34=", X/binary >> <- M])
Answer -> <<"21">>
How can I convert this to an integer as efficiently as possible?
[<<"34=",X/binary>>] = M,
list_to_integer(binary_to_list(X)).
That yields the integer 21
As of R16B, the BIF binary_to_integer/1 can be used:
OTP-10300
Added four new bifs, erlang:binary_to_integer/1,2,
erlang:integer_to_binary/1, erlang:binary_to_float/1 and
erlang:float_to_binary/1,2. These bifs work similarly to how
their list counterparts work, except they operate on
binaries. In most cases converting from and to binaries is
faster than converting from and to lists.
These bifs are auto-imported into erlang source files and can
therefore be used without the erlang prefix.
So that would look like:
[<<"34=",X/binary>>] = M,
binary_to_integer(X).
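Putting it together in the shell (a sketch of what the session should look like):
1> M = [<<"34=21">>].
[<<"34=21">>]
2> [<<"34=", X/binary>>] = M, binary_to_integer(X).
21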
A single digit character N can be converted with N - 48 (48 is the code of the character $0). For multi-digit numbers you can fold over the binary, multiplying each digit by the power of ten for its position (math:pow/2 returns a float, hence the erlang:trunc/1 at the end):
-spec to_int(binary()) -> integer().
to_int(Bin) when is_binary(Bin) ->
    to_int(Bin, {size(Bin), 0}).

to_int(_, {0, Acc}) ->
    erlang:trunc(Acc);
to_int(<<N/integer, Tail/binary>>, {Pos, Acc}) when N >= 48, N =< 57 ->
    to_int(Tail, {Pos-1, Acc + ((N-48) * math:pow(10, Pos-1))}).
The performance of this is around 100 times slower than using the list_to_integer(binary_to_list(X)) option.

Lua base converter

I need a base converter function for Lua. I need to convert from base 10 to bases 2, 3, 4, ..., 36. How can I do this?
In the string to number direction, the function tonumber() takes an optional second argument that specifies the base to use, which may range from 2 to 36 with the obvious meaning for digits in bases greater than 10.
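For example (a couple of quick checks):
print(tonumber("ff", 16))        --> 255
print(tonumber("11111111", 2))   --> 255
print(tonumber("z", 36))         --> 35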
In the number to string direction, this can be done slightly more efficiently than Nikolaus's answer by something like this:
local floor, insert = math.floor, table.insert
function basen(n, b)
    n = floor(n)
    if not b or b == 10 then return tostring(n) end
    local digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    local t = {}
    local sign = ""
    if n < 0 then
        sign = "-"
        n = -n
    end
    repeat
        local d = (n % b) + 1
        n = floor(n / b)
        insert(t, 1, digits:sub(d, d))
    until n == 0
    return sign .. table.concat(t, "")
end
This creates fewer garbage strings to collect by using table.concat() instead of repeated calls to the string concatenation operator (..). Although it makes little practical difference for strings this small, this idiom is worth learning because building a buffer in a loop with the concatenation operator tends toward O(n^2) performance, while table.concat() has been designed to do substantially better.
There is an unanswered question as to whether it is more efficient to push the digits on a stack in the table t with calls to table.insert(t,1,digit), or to append them to the end with t[#t+1]=digit, followed by a call to string.reverse() to put the digits in the right order. I'll leave the benchmarking to the student. Note that although the code I pasted here does run and appears to get correct answers, there may be other opportunities to tune it further.
For example, the common case of base 10 is culled off and handled with the built in tostring() function. But similar culls can be done for bases 8 and 16 which have conversion specifiers for string.format() ("%o" and "%x", respectively).
Also, neither Nikolaus's solution nor mine handle non-integers particularly well. I emphasize that here by forcing the value n to an integer with math.floor() at the beginning.
Correctly converting a general floating point value to any base (even base 10) is fraught with subtleties, which I leave as an exercise to the reader.
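A few sample calls of basen as defined above (easy to cross-check with tonumber):
print(basen(255, 16))   --> FF
print(basen(255, 2))    --> 11111111
print(basen(-42, 36))   --> -16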
You can use a loop to convert an integer into a string in the required base. For bases below 10 use the following code; if you need a larger base, you need to add a line that maps the result of x % base to a character (using an array, for example):
x = 1234
r = ""
base = 8
while x > 0 do
    r = "" .. (x % base) .. r
    x = math.floor(x / base)
end
print(r)
