Z3: solving a Verilog assign statement

I ran the code below using Z3's Java API:
BitVecExpr a = ctx.mkBVConst("a",8);
BitVecExpr b = ctx.mkBVConst("b",8);
BitVecExpr c = ctx.mkBVConst("c",8);
BitVecExpr d = (BitVecNum) ctx.mkNumeral("11",ctx.mkBitVecSort(8));
Solver s = ctx.mkSolver();
s.add(ctx.mkEq(c,ctx.mkBVXOR(a,b)));
s.add(ctx.mkEq(c,d));
s.check();                     // a model is only available after check() returns SATISFIABLE
Model m = s.getModel();
System.out.println("m.eval(a) " + m.eval(a,false)); /*m.eval(a) 0 */
System.out.println("m.eval(b) " + m.eval(b,false)); /*m.eval(b) 11 */
System.out.println("m.eval(c) " + m.eval(c,false)); /*m.eval(c) 11 */
My questions are:
How can I pass a hexadecimal value for d? Does mkNumeral only take a decimal value?
Is there any other way to pass a bit value to d?
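For what it's worth, here is a minimal sketch of two alternatives, assuming the bindings in use expose Context.mkBV(int, int); since mkBV takes a plain Java int, any Java literal form (hexadecimal, binary, decimal) works, because Java evaluates the literal before Z3 ever sees it:

BitVecExpr dHex = ctx.mkBV(0x0B, 8);         // hexadecimal Java literal
BitVecExpr dBin = ctx.mkBV(0b00001011, 8);   // binary Java literal (Java 7+), same value
BitVecExpr dDec = (BitVecNum) ctx.mkNumeral("11", ctx.mkBitVecSort(8)); // decimal string, as in the question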

Related

undefined reference to `dgemm_' in gfortran in windows subsystem ubuntu

I have the following Fortran code from https://software.intel.com/content/www/us/en/develop/documentation/mkl-tutorial-fortran/top/multiplying-matrices-using-dgemm.html
I am trying to compile it with gfortran (saved as dgemm.f90).
! Fortran source code is found in dgemm_example.f
PROGRAM MAIN
IMPLICIT NONE
DOUBLE PRECISION ALPHA, BETA
INTEGER M, K, N, I, J
PARAMETER (M=2000, K=200, N=1000)
DOUBLE PRECISION A(M,K), B(K,N), C(M,N)
PRINT *, "This example computes real matrix C=alpha*A*B+beta*C"
PRINT *, "using Intel(R) MKL function dgemm, where A, B, and C"
PRINT *, "are matrices and alpha and beta are double precision "
PRINT *, "scalars"
PRINT *, ""
PRINT *, "Initializing data for matrix multiplication C=A*B for "
PRINT 10, " matrix A(",M," x",K, ") and matrix B(", K," x", N, ")"
10 FORMAT(a,I5,a,I5,a,I5,a,I5,a)
PRINT *, ""
ALPHA = 1.0
BETA = 0.0
PRINT *, "Intializing matrix data"
PRINT *, ""
DO I = 1, M
DO J = 1, K
A(I,J) = (I-1) * K + J
END DO
END DO
DO I = 1, K
DO J = 1, N
B(I,J) = -((I-1) * N + J)
END DO
END DO
DO I = 1, M
DO J = 1, N
C(I,J) = 0.0
END DO
END DO
PRINT *, "Computing matrix product using Intel(R) MKL DGEMM "
PRINT *, "subroutine"
CALL DGEMM('N','N',M,N,K,ALPHA,A,M,B,K,BETA,C,M)
PRINT *, "Computations completed."
PRINT *, ""
PRINT *, "Top left corner of matrix A:"
PRINT 20, ((A(I,J), J = 1,MIN(K,6)), I = 1,MIN(M,6))
PRINT *, ""
PRINT *, "Top left corner of matrix B:"
PRINT 20, ((B(I,J),J = 1,MIN(N,6)), I = 1,MIN(K,6))
PRINT *, ""
20 FORMAT(6(F12.0,1x))
PRINT *, "Top left corner of matrix C:"
PRINT 30, ((C(I,J), J = 1,MIN(N,6)), I = 1,MIN(M,6))
PRINT *, ""
30 FORMAT(6(ES12.4,1x))
PRINT *, "Example completed."
STOP
END
Compiling with gfortran -lblas -llapack dgemm.f90, I got:
/tmp/ccUtHQz1.o: In function `MAIN__':
dgemm.f90:(.text+0x794): undefined reference to `dgemm_'
collect2: error: ld returned 1 exit status
I have seen that this type of question gets asked from time to time, but I haven't found a solution for my case :(
I also tried loading MKL from Python, based on https://software.intel.com/content/www/us/en/develop/articles/using-intel-mkl-in-your-python-programs.html
from ctypes import *
mkl = cdll.LoadLibrary("./anaconda3/lib/libmkl_rt.so")
dgemm = mkl.cblas_dgemm
def print_mat(mat, m, n):
    for i in xrange(0,m):
        print " ",
        for j in xrange(0,n):
            print mat[i*n+j],
        print
Order = 101 # 101 for row-major, 102 for column major data structures
TransA = 111 # 111 for no transpose, 112 for transpose, and 113 for conjugate transpose
TransB = 111
m = 2
n = 4
k = 3
lda = k
ldb = n
ldc = n
alpha = 1.0
beta = -1.0
amat = c_double * 6
bmat = c_double * 12
cmat = c_double * 8
a = amat(1,2,3, 4,5,6)
b = bmat(0,1,0,1, 1,0,0,1, 1,0,1,0)
c = cmat(5,1,3,3, 11,4,6,9)
print "\nMatrix A ="
print_mat(a,2,3)
print "\nMatrix B ="
print_mat(b,3,4)
print "\nMatrix C ="
print_mat(c,2,4)
print "\nCompute", alpha, "* A * B + ", beta, "* C"
dgemm( c_int(Order), c_int(TransA), c_int(TransB), c_int(m), c_int(n), c_int(k), c_double(alpha), byref(a), c_int(lda), byref(b), c_int(ldb), c_double(beta), byref(c), c_int(ldc))
print_mat(c,2,4)
print
The above code works. So the problem may be related to using MKL with gfortran?
You should follow Intel's website to set the compiler flags for gfortran + MKL. Otherwise you will be linking against something else.
https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/onemkl/link-line-advisor.html
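For illustration only (the exact flags come from the link-line advisor and depend on which MKL layout you installed, and ${MKLROOT} must point at your MKL installation), a dynamic, sequential, 64-bit gfortran link line typically looks something like:

gfortran dgemm.f90 -m64 -I${MKLROOT}/include -L${MKLROOT}/lib/intel64 -Wl,--no-as-needed -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl

Note that the libraries come after the source/object files on the command line.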

variables changing value in fortran after calling random_seed

My problem is the following: I have an integer which changes value after calling random_seed. The value is passed in as a parameter and should be 20.
These are the relevant parts of the code:
implicit none
external masres
integer :: i, j, k, n, nc, nr, irch, ncor, ierr, nfil21
...
write(*,*) 'n = ', n
del = 0.d+0
del1 = 0.d+0
call random_seed(size=n)
write(*,*) 'n = ', n
And my output is:
n = 20
n = 33
I tried a workaround which failed too:
write(*,*) 'n = ', n
k = n
del = 0.d+0
del1 = 0.d+0
call random_seed(size=k)
write(*,*) 'k = ', k
allocate(seed(n))
call system_clock(COUNT=clock)
if(info.eq.1) then
seed = clock+37*[(1-i**2/1+i**2,i=1,n)]
else
seed = into+37*[(i+24)**2,i=1,n]
endif
call random_seed(put=seed)
What I get is:
n = 20
k = 33
Fortran runtime error: Array size of PUT is too small.
This is obvious, as the random seed is apparently initialized with 33 instead of 20, so the allocated seed array will be too small.
I don't know if it matters, but I use the gfortran compiler.
Thank you in advance for your help. Tell me if I should post more code or upload the file.
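In case a self-contained example helps: random_seed(size=...) is specified to return the required seed length in its argument, so whatever variable is passed there gets overwritten. Below is a minimal sketch of the usual query/allocate/put pattern, keeping the size in its own variable (seed_demo and seed_size are only illustrative names):

program seed_demo
implicit none
integer :: n, seed_size, i
integer, allocatable :: seed(:)
n = 20                             ! some unrelated value we want to keep
call random_seed(size=seed_size)   ! seed_size receives the required length (33 here, per the output above)
allocate(seed(seed_size))
seed = [(37*(i+24)**2, i=1, seed_size)]
call random_seed(put=seed)
write(*,*) 'n = ', n, ' seed_size = ', seed_size
end program seed_demo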

syntax: unexpected "=" in Julia

I have written this segment of Julia code, which filters keypoints in an image by interpolating the fields of each extremum from a difference-of-Gaussians scale space.
mutable struct discrete_extremum
o
s
m
n
intensity
end
mutable struct candidateKeypoint
oE
s
m
n
σ
x
y
ω
end
LB = Array{candidateKeypoint}(0)
for extremum in LA′
for i = 1:5
H̄ = Hessian(extremum.o, extremum.s, extremum.m, extremum.n)
ḡ = ThreeDgradient(extremum.o, extremum.s, extremum.m, extremum.n)
α⋆ = alphaStar(H̄, ḡ)
ω = omega(H̄, ḡ, extremum.o, extremum.s, extremum.m, extremum.n)
δOE = δMin * 2^(extremum.o - 1)
α1⋆ = α⋆[1]
α2⋆ = α⋆[2]
α3⋆ = α⋆[3]
σ = (δOE/δMin) * σMin * 2^((α1⋆ + extremum.s)/nSpo)
x = δOE * (α2⋆ + extremum.m)
y = δOE * (α2⋆ + extremum.n)
extremum.s, extremum.m, extremum.n = round(Int64, extremum.s + α1⋆), round(Int64, extremum.m + α2⋆), round(Int64, extremum.n + α3⋆)
if max(abs(α1⋆), abs(α2⋆), abs(α3⋆)) < 0.6
break
end
end
if max(abs(α1⋆), abs(α2⋆), abs(α3⋆)) < 0.6
push!(LB, candidateKeypoint(extremum, σ, x, y, ω))
end
end
I am getting this error:
syntax: unexpected "="
There is no line number mentioned. Can you guys suggest what I am doing wrong?
P.S. I have posted the question here as well.
I guess the problem is your α⋆, α1⋆, α2⋆, α3⋆ variables, because the symbol ⋆ is a Julia operator. In general, using unicode characters is fine but try to avoid using operator symbols that already have a meaning in the language.
You could replace your α⋆ with α′ (\alpha followed by \prime) or α̂ (\alpha followed by \hat), for example. However, don't go overboard with non-ASCII characters. They are there to make code more readable, not less.
UPDATE: Incorporated comments by @DNF and @phg.
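You can see the parsing issue in isolation in the REPL (a rough sketch; the exact error text may vary between Julia versions):

julia> α⋆ = 1        # read as α ⋆ (= 1), because ⋆ is an infix operator
ERROR: syntax: unexpected "="

julia> α′ = 1        # α′ is an ordinary identifier
1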

Pure Lua hashing method

This has been bothering me for a while now: I cannot seem to find a pure Lua implementation of a popular hashing method like SHA256, SHA512, or Whirlpool. I need it because I will be hashing the password client side before sending it off to a server. Speed isn't a worry here; I don't care if it takes 10 or so seconds to hash 10,000 times, since I will be running it on a thread.
I have tried a couple before, which seemed to work perfectly fine at first, but when I tried different input strings (usually longer ones), the hash came out totally incorrect.
I am using the LuaJIT version of Love2D, so it already has the BitOp library implemented. If any of you know any good implementations of these hashing methods or any similar secure ones then please let me know!
Thank you!
UPDATE: Here are some results!
Firstly this is the test code I am using.
https://github.com/JustAPerson/LuaCrypt
INPUT: Test string
OUTPUT: a3e49d843df13c2e2a7786f6ecd7e0d184f45d718d1ac1a8a63e570466e489dd
EXPECTED: a3e49d843df13c2e2a7786f6ecd7e0d184f45d718d1ac1a8a63e570466e489dd
INPUT: This is a test string to hash
OUTPUT: 05b4ac920d4130cb9d9bb046cac7476f35d7404cf116dc8d6d4a113c3c79d904
EXPECTED: f70b476ff948472f8e4e52793a5a2779e636c20dd5336d3a8a4455374318db35
https://bitbucket.org/Boolsheet/sil/raw/tip/hash.lua
INPUT: Test string
OUTPUT: 8f1a5b37fbe986953c343d5b839b14843c6c29d47a6a7e52f263cd82ad6141a3
EXPECTED: a3e49d843df13c2e2a7786f6ecd7e0d184f45d718d1ac1a8a63e570466e489dd
INPUT: This is a test string to hash
OUTPUT: 167bf7b9000442419b3016a6e1edfcc7c8d40b5f0b80518a31ddb0bbd388e87ac
EXPECTED: f70b476ff948472f8e4e52793a5a2779e636c20dd5336d3a8a4455374318db35
I would recommend against using SHA256 for passwords. SHA256 hashes are easy to brute-force nowadays, and the way you are using them is vulnerable to replay attacks.
Also if you must use SHA256, use the version from OpenSSL if possible (especially if your program already depends on OpenSSL.)
But if you must use it (and cannot link with OpenSSL, but can use FFI) here is a LuaJIT version of SHA256 (only) that I am using in one of my projects.
local bit = require 'bit'
local ffi = require 'ffi'
local type = type
local band, bnot, bswap, bxor, rol, ror, rshift, tobit =
bit.band, bit.bnot, bit.bswap, bit.bxor, bit.rol, bit.ror, bit.rshift, bit.tobit
local min, max = math.min, math.max
local C = ffi.C
local istype, new, fill, copy, cast, sizeof, ffi_string =
ffi.istype, ffi.new, ffi.fill, ffi.copy, ffi.cast, ffi.sizeof, ffi.string
local sha256 = {}
ffi.cdef [[
void *malloc(size_t size);
void free(void *ptr);
]]
local ctHashState = ffi.typeof 'uint32_t[8]'
local cbHashState = ffi.sizeof(ctHashState)
local ctBlock = ffi.typeof 'uint32_t[64]'
local cbBlock = ffi.sizeof(ctBlock)
local ctpu8 = ffi.typeof 'uint8_t *'
local ctpcu8 = ffi.typeof 'const uint8_t *'
local ctpu32 = ffi.typeof 'uint32_t *'
local ctpu64 = ffi.typeof 'uint64_t *'
-- This struct is used by the 'preprocess' iterator function. It keeps track
-- of the end of the input string + the total input length in bits + a pointer
-- to the block buffer (where expansion takes place.)
local ctBlockIter
local cmtBlockIter = {}
function cmtBlockIter.__sub(a, b)
if istype(ctBlockIter, a) then a = a.limit end
if istype(ctBlockIter, b) then b = b.limit end
return a - b
end
function cmtBlockIter:__tostring()
return string.format("<ctBlockIter: limit=%s; keyLength=%s>",
tostring(self.base), tostring(self.keyLength))
end
ctBlockIter = ffi.metatype([[
struct {
const uint8_t *limit;
uint32_t *blockBuffer;
uint64_t keyLength;
}
]], cmtBlockIter)
-- Initial state of the hash
local init_h = new('const uint32_t[8]', {
0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19
})
-- Constants used in the add step of the compression function
local k = new('const uint32_t[64]', {
0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
})
-- Expand block from 512 to 2048 bits
local function expand(w)
for i = 16, 63 do
local s0 = bxor(ror(w[i-15], 7), ror(w[i-15], 18), rshift(w[i-15], 3))
local s1 = bxor(ror(w[i-2], 17), ror(w[i-2], 19), rshift(w[i-2], 10))
w[i] = w[i-16] + s0 + w[i-7] + s1
end
end
-- Process one expanded block and update the hash state
local function compress(hh, w)
local a, b, c, d, e, f, g, h =
hh[0],hh[1],hh[2],hh[3],hh[4],hh[5],hh[6],hh[7]
for i = 0, 63 do
local S1 = bxor(ror(e, 6), ror(e, 11), ror(e, 25))
local ch = bxor(band(e, f), band(bnot(e), g))
local t = tobit(h + S1 + ch + k[i] + w[i])
local S0 = bxor(ror(a, 2), ror(a, 13), ror(a, 22))
local maj = bxor(band(a, bxor(b, c)), band(b, c))
a, b, c, d, e, f, g, h =
tobit(t + S0 + maj),
a, b, c,
tobit(d + t),
e, f, g
end
hh[0],hh[1],hh[2],hh[3],hh[4],hh[5],hh[6],hh[7] =
hh[0]+a, hh[1]+b, hh[2]+c, hh[3]+d,
hh[4]+e, hh[5]+f, hh[6]+g, hh[7]+h
end
-- Take a 512-bit chunk from the input.
-- If it is the final chunk, also add padding
local keyLengthOfs = ffi.offsetof(ctBlockIter, 'keyLength')
local function nextBlock(state, input)
local w = state.blockBuffer
local cLen = min(state - input, 64)
if cLen < -8 then return nil end
fill(w, 256, 0)
copy(w, input, max(0, cLen))
if 0 <= cLen and cLen < 64 then
copy(cast(ctpu8, w)+cLen, '\128', 1)
end
for i = 0, 15 do w[i] = bswap(w[i]) end
if cLen <= (64-8-1) then
copy(cast(ctpu64, w) + 7, cast(ctpu8, state) + keyLengthOfs, 8)
w[14], w[15] = w[15], w[14]
end
input = input + 64
return input
end
-- Iterator that yields one block (possibly padded) at a time from the input
local function preprocess(input, len, w)
len = len or (type(input) == 'string' and #input or sizeof(input))
input = cast(ctpu8, input)
local it = new(ctBlockIter)
it.blockBuffer = w
it.limit = input+len
it.keyLength = len*8
return nextBlock, it, input
end
-- Compute a binary hash (32-byte binary string) from the input
function sha256.binFromBin(input, len)
local h = new(ctHashState)
local w = cast(ctpu32, C.malloc(cbBlock))
copy(h, init_h, cbHashState)
for _ in preprocess(input, len, w) do
expand(w)
compress(h, w)
end
for i = 0, 7 do h[i] = bswap(h[i]) end
C.free(w)
return ffi_string(h, 32)
end
local hexDigits = new('char[16]', "0123456789abcdef")
local hexOut = new('char[65]')
-- Compute the hash and convert to hexadecimal
function sha256.hexFromBin(input, len)
local h = new(ctHashState)
local w = cast(ctpu32, C.malloc(cbBlock))
copy(h, init_h, cbHashState)
for _ in preprocess(input, len, w) do
expand(w)
compress(h, w)
end
for i = 0, 7 do
local w = h[i]
for j = 0, 3 do
w = rol(w, 8)
hexOut[i*8 + j*2] = hexDigits[band(rshift(w, 4), 15)]
hexOut[i*8 + j*2 + 1] = hexDigits[band(w, 15)]
end
end
C.free(w)
return ffi_string(hexOut, 64)
end
return sha256
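Usage is straightforward; assuming the module above is saved as sha256.lua somewhere on package.path:

local sha256 = require 'sha256'
print(sha256.hexFromBin('Test string'))    -- 64-character hex digest
print(#sha256.binFromBin('Test string'))   -- 32 (raw binary digest)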
There is an implementation of SHA256 at the Lua User's Wiki. The page observes it is Lua 5.2. I would imagine that it would be practical to make that work in LuaJIT without too much trouble.
Do pay attention to the larger security issues surrounding passwords and authentication. The usual advice applies; rolling your own security rather than using an existing tested and supported implementation is not something to be done lightly.
Since you are using LuaJIT, you should be able to leverage its very powerful FFI capabilities to use crypto supplied on your native platform. That will likely require writing some FFI-flavored Lua that is platform specific to each platform on which your client expects to run, but from what I've seen by lurking in the LuaJIT mailing list that shouldn't be too painful.
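As a rough sketch of that FFI approach on a platform where OpenSSL's libcrypto happens to be available (the library name and the one-shot SHA256() entry point are assumptions about your environment, not something Love2D guarantees):

local ffi = require 'ffi'
-- declaration simplified from OpenSSL's prototype (which uses unsigned char* for the input)
ffi.cdef[[
unsigned char *SHA256(const char *d, size_t n, unsigned char *md);
]]
local crypto = ffi.load('crypto')  -- library name is platform specific (libcrypto.so, libeay32.dll, ...)
local function sha256hex(s)
  local md = ffi.new('unsigned char[32]')
  crypto.SHA256(s, #s, md)
  local out = {}
  for i = 0, 31 do out[#out + 1] = string.format('%02x', md[i]) end
  return table.concat(out)
end
print(sha256hex('Test string'))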

z3 fails with this system of equations

Over the years I have kept track of solving technology, and I maintain a blog post about applying it to a specific puzzle: the "crossing ladders".
To get to the point, I accidentally found out about z3, and tried putting it to use in the specific problem. I used the Python bindings, and wrote this:
$ cat laddersZ3.py
#!/usr/bin/env python
from z3 import *
a = Int('a')
b = Int('b')
c = Int('c')
d = Int('d')
e = Int('e')
f = Int('f')
solve(
a>0, a<200,
b>0, b<200,
c>0, c<200,
d>0, d<200,
e>0, e<200,
f>0, f<200,
(e+f)**2 + d**2 == 119**2,
(e+f)**2 + c**2 == 70**2,
e**2 + 30**2 == a**2,
f**2 + 30**2 == b**2,
a*d == 119*30,
b*c == 70*30,
a*f - 119*e + a*e == 0,
b*e - 70*f + b*f == 0,
d*e == c*f)
Unfortunately, z3 reports...
$ python laddersZ3.py
failed to solve
The problem does have at least this integer solution: a=34, b=50, c=42, d=105, e=16, f=40.
Am I doing something wrong, or is this kind of system of equations / range constraints beyond what z3 can solve?
Thanks in advance for any help.
UPDATE, 5 years later: Z3 now solves this out of the box.
You can solve this using Z3 if you encode the integers as reals, which will force Z3 to use the nonlinear real arithmetic solver. See this for more details on the nonlinear integer vs. real arithmetic solvers: How does Z3 handle non-linear integer arithmetic?
Here's your example encoded as reals with the solution (z3py link: http://rise4fun.com/Z3Py/1lxH ):
a,b,c,d,e,f = Reals('a b c d e f')
solve(
a>0, a<200,
b>0, b<200,
c>0, c<200,
d>0, d<200,
e>0, e<200,
f>0, f<200,
(e+f)**2 + d**2 == 119**2,
(e+f)**2 + c**2 == 70**2,
e**2 + 30**2 == a**2,
f**2 + 30**2 == b**2,
a*d == 119*30,
b*c == 70*30,
a*f - 119*e + a*e == 0,
b*e - 70*f + b*f == 0,
d*e == c*f) # yields [a = 34, b = 50, c = 42, d = 105, e = 16, f = 40]
While the result is integral, as you noted and as Z3 finds, Z3 apparently needs to use the real arithmetic solver to handle it.
Alternatively, you can leave the variables declared as integers and do the following from the suggestion at the referenced post:
t = Then('purify-arith','nlsat')
s = t.solver()
solve_using(s, P)
where P is the conjunction of the constraints (z3py link: http://rise4fun.com/Z3Py/7nqN ).
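For completeness, this is roughly what that looks like (P is just the conjunction of the same constraints, built with And):

a, b, c, d, e, f = Ints('a b c d e f')
P = And(a>0, a<200, b>0, b<200, c>0, c<200,
        d>0, d<200, e>0, e<200, f>0, f<200,
        (e+f)**2 + d**2 == 119**2,
        (e+f)**2 + c**2 == 70**2,
        e**2 + 30**2 == a**2,
        f**2 + 30**2 == b**2,
        a*d == 119*30, b*c == 70*30,
        a*f - 119*e + a*e == 0,
        b*e - 70*f + b*f == 0,
        d*e == c*f)
t = Then('purify-arith', 'nlsat')
s = t.solver()
solve_using(s, P)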
Rather than asking Z3 for a solution in reals, you could ask the solver of the Microsoft Solver Foundation:
using Microsoft.SolverFoundation.Services;
static Term sqr(Term t)
{
return t * t;
}
static void Main(string[] args)
{
SolverContext context = SolverContext.GetContext();
Domain range = Domain.IntegerRange(1, 199); // integers ]0; 200[
Decision a = new Decision(range, "a");
Decision b = new Decision(range, "b");
Decision c = new Decision(range, "c");
Decision d = new Decision(range, "d");
Decision e = new Decision(range, "e");
Decision f = new Decision(range, "f");
Model model = context.CreateModel();
model.AddDecisions(a, b, c, d, e, f);
model.AddConstraints("limits",
sqr(e+f) + d*d == 119*119,
sqr(e+f) + c*c == 70*70,
e*e + 30*30 == a*a,
f*f + 30*30 == b*b,
a*d == 119*30,
b*c == 70*30,
a*f - 119*e + a*e == 0,
b*e - 70*f + b*f == 0,
d*e == c*f);
Solution solution = context.Solve();
Report report = solution.GetReport();
Console.WriteLine("a={0} b={1} c={2} d={3} e={4} f={5}", a, b, c, d, e, f);
Console.Write("{0}", report);
}
The solver comes up with the solution you mentioned within fractions of a second. The Express Edition used to be free, but I am not sure about the current state.
a: 34
b: 50
c: 42
d: 105
e: 16
f: 40
There is no algorithm that, in general, can answer whether a multivariate polynomial equation (or a system thereof, as in your case) has integer solution (this is the negative answer to Hilbert's tenth problem). Thus, all solving methods for integers are either restricted to certain classes (e.g. linear equations, polynomials in one variable...) or use incomplete tricks, such as:
Linearizing expressions
Encoding equations into finite-bitwidth numbers (OK for searching for "small" solutions).
This is why Z3 needs to be told to use the real number solver.
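As an illustration of the finite-bitwidth trick on this very system (a sketch only: 32-bit vectors are wide enough that none of the intermediate products can wrap for values below 200, but bit-blasting nine nonlinear constraints can still be slow):

from z3 import *
a, b, c, d, e, f = BitVecs('a b c d e f', 32)
s = Solver()
for v in (a, b, c, d, e, f):
    s.add(v > 0, v < 200)          # signed comparisons; harmless here since everything fits in 32 bits
s.add((e+f)*(e+f) + d*d == 119*119,
      (e+f)*(e+f) + c*c == 70*70,
      e*e + 30*30 == a*a,
      f*f + 30*30 == b*b,
      a*d == 119*30, b*c == 70*30,
      a*f - 119*e + a*e == 0,
      b*e - 70*f + b*f == 0,
      d*e == c*f)
if s.check() == sat:
    print(s.model())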
