I have a Keras layer which outputs N timesteps of size M (so the shape is N x M). I want to append the same vector of size 1 x K to every timestep, so the output should have N timesteps of size M + K. If I use the Concatenate layer like this:
x = Concatenate()([x, v])
it gives an error since the dimensions do not match. And if I use a TimeDistributed wrapper like this:
x = TimeDistributed(Concatenate())([x, v])
it gives an error since vector v does not have timesteps.
What is the easiest way of doing this?
Thanks!!
First, duplicate your vector N times using RepeatVector:
v = RepeatVector(N)(v) # shape == (N, K)
Then, since their shapes now match ((N, M) and (N, K)), you can concatenate them:
x = Concatenate()([x, v]) # shape == (N, M+K)
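Putting the two steps together, a minimal sketch using the functional API could look like this (the sizes n, m, k and the Input layers here are hypothetical, for illustration only):
from keras.layers import Input, RepeatVector, Concatenate
from keras.models import Model

n, m, k = 10, 8, 4  # hypothetical sizes

x_in = Input(shape=(n, m))  # n timesteps of size m
v_in = Input(shape=(k,))    # one vector of size k

v = RepeatVector(n)(v_in)       # shape == (n, k)
out = Concatenate()([x_in, v])  # shape == (n, m + k)

model = Model(inputs=[x_in, v_in], outputs=out)  # output: (batch, n, m + k)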
If N is unknown you can do this manually with the corresponding backend functions in a Lambda layer. Note that x.shape[1] is undefined when the number of timesteps is unknown, so the dynamic shape K.shape(x)[1] is used instead:
from keras import backend as K
from keras.layers import Lambda

def func(xv):
    x, v = xv
    n = K.shape(x)[1]             # dynamic number of timesteps
    v = K.repeat(v, n)            # shape == (batch, n, K)
    return K.concatenate([x, v])  # concatenate along the last axis

x = Lambda(func)([x, v])
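A usage sketch under the same assumption of an unknown time dimension, reusing func from above (the Input layers and sizes are again hypothetical):
from keras.layers import Input, Lambda

m, k = 8, 4                    # hypothetical feature sizes
x_in = Input(shape=(None, m))  # unknown number of timesteps
v_in = Input(shape=(k,))
out = Lambda(func)([x_in, v_in])  # shape == (batch, None, m + k)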
Like in the code below, is there any function in Z3 to get all the clauses of a formula (as a CNF)?
x = Boolean('x')
y = Boolean('y')
f = And(x, Or(x,y), And(x, Not(x,y)))
# can I get all the clauses of formula f stored in a list
You can do something like the following:
from z3 import *
x = Bool('x') # Note: Bool() rather than Boolean()
y = Bool('y')
z = Bool('z')
f = And(x, Or(x,y), And(x, z == Not(y)))
# from https://stackoverflow.com/a/18003288/1911064
g = Goal()
g.add(f)
# use describe_tactics() to get to know the tactics available
t = Tactic('tseitin-cnf')
clauses = t(g)
for clause in clauses[0]:
    print(clause)
Output is a list of disjunctive clauses:
x
Or(x, y)
Or(y, z)
Or(Not(y), Not(z))
Your original expression is not satisfiable.
What is Not(x, y) supposed to do?
A simpler way to convert (nested) Boolean expressions to CNF is provided by bc2cnf.
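Since the question asks for the clauses stored in a list: the subgoal returned by the tactic behaves like a sequence, so it can be turned into a Python list directly. A small sketch repeating the setup above:
from z3 import *

x, y, z = Bools('x y z')
g = Goal()
g.add(And(x, Or(x, y), And(x, z == Not(y))))
clauses = Tactic('tseitin-cnf')(g)
clause_list = list(clauses[0])  # the clauses of the first subgoal as a list
print(clause_list)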
I'm working with the Math.NET Numerics library and am just attempting to sum all items in a matrix; it appears not to work. Here is what I've got. It seems to want some type annotations somewhere on the sum; I tried the annotations below, but then it seems to think that float is incompatible:
let SumSquares (theta:Vector<float>) (y:float) (trainingData:Matrix<float>) : CostFunction =
    let m = trainingData.RowCount
    trainingData
    |> Matrix.mapRows(fun a r -> vector[ square((theta * r) - y) ])
    |> Matrix.sum<float,float>()
I have a torch tensor of size (1 x n x n x n) and I would like to randomly choose one of the last 3 dimensions and randomly slice it down to size s. For example, it could output the tensors below with equal probability:
(1 x s x n x n)
(1 x n x s x n)
(1 x n x n x s)
I realise I could just do a few if/else statements, but I am curious whether there is a "neater" option using a function like torch.random(1,4) to select the dimension.
Assuming that you want to narrow a randomly positioned block of s elements out of the n, let's use :narrow:
n = 100
s = 20
x = torch.randn(1, n, n, n)
-- pick a dimension in {2, 3, 4} and a start offset in [1, n - s + 1]
y = x:narrow(torch.random(2, 4), torch.random(1, n - s + 1), s)
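For comparison, if you are on modern (Python) PyTorch rather than Lua Torch, a sketch of the same idea could use torch.randint and Tensor.narrow (the sizes here are illustrative):
import torch

n, s = 100, 20
x = torch.randn(1, n, n, n)

dim = torch.randint(1, 4, (1,)).item()            # one of the last three dims (1, 2 or 3)
start = torch.randint(0, n - s + 1, (1,)).item()  # random slice offset
y = x.narrow(dim, start, s)  # e.g. dim == 1 gives shape (1, s, n, n)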
I have a vector V of 256 positions. For each position in the vector I want to calculate the density under a Gaussian distribution:
V[0] -> p(V[0])
V[1] -> p(V[1])
...
V[255] -> p(V[255])
After that I want to multiply all the probabilities together:
p(V[0]) * p(V[1]) * ... * p(V[255])
The problem with this method in my implementation is that each density is high (around 400), so multiplying them all together overflows.
A workaround for that would be taking the log of each Gaussian density and then adding them all together:
V[0] -> log(p(V[0]))
V[1] -> log(p(V[1]))
...
V[255] -> log(p(V[255]))
log(p(V[0])) + log(p(V[1])) + ... + log(p(V[255]))
But when I try to do that I get an error when the Gaussian results in zero.
With this in mind, is there any workaround for the log(0) problem? Would replacing log(0) with 0 be an accurate representation?
EDIT:
So, for the record: if I try the normal method (multiplying), this is the error that I get:
iex(6)> Naive.CLI.main(["~/data/usps.csv", "~/indices/17.csv"])
** (ArithmeticError) bad argument in arithmetic expression
:erlang.*(417.62100246853674, 6.504406716503509e307)
From what I understand, the numbers become too large for the multiplication to work.
Here is what I did (and I think it should be the right idea):
def gaussian(vector, mean, variance) do
  vector
  |> Enum.zip(mean)
  |> Enum.zip(variance)
  |> Enum.map(fn {{e, m}, v} -> {e, m, v} end)
  |> Enum.map(fn {e, m, v} -> calculate(e, m, v) end)
  |> Enum.map(fn e ->
    if e == 0.0 do
      0.0
    else
      :math.log10(e)
    end
  end)
end

defp calculate(elem, mean, variance) do
  (1 / :math.sqrt(2 * :math.pi * variance)) *
    :math.exp(-0.5 * ((elem - mean) * (elem - mean) / variance))
end
Basically if the Gaussian results in zero, I return zero as well.
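(Side note, not from the original post: one standard way to avoid both the overflow and the log(0) underflow is to compute the log-density analytically instead of taking the log of calculate/3's result, since log p(e) = -0.5 * log(2 * pi * v) - (e - m)^2 / (2 * v) never produces an underflowed zero.)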
I have simple Z3 Python code like below. I expect the print line to give me back y, which was stored by the line above it. Instead, I got back A[x] as the result.
I = IntSort()
A = Array('A', I, I)
x = Int('x')
y = Int('y')
Store(A, x, y)
print(Select(A, x))
Why doesn't Select() return the value stored by Store()?
Thanks.
There are two things to note:
First:
When you write
Store(A, x, y)
you create a term with three arguments: A, x, and y.
There is no side-effect on A.
You can create a name for this term by writing
B = Store(A, x, y)
Second:
Z3 does not simplify terms unless you want it to.
The Python API exposes a simplification function called simplify.
You can obtain the reduced term by calling the simplifier.
The example is:
I = IntSort()
A = Array('A', I, I)
x = Int('x')
y = Int('y')
B = Store(A, x, y)
print(Select(B, x))
print(simplify(Select(B, x)))
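The first print should show the unreduced term, something like Store(A, x, y)[x], while the second, simplified one should print just y.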