I have a mesh network as shown in the figure.
Now, I am allocating values to all edges in this mesh network. I want to enforce in my program that there are no closed loops in my allocation. For example, the constraint for the top-left square can be written as:
E0 = 0 or E3 = 0 or E4 = 0 or E7 = 0, i.e., at least one of the links has to be inactive in order not to form a loop. However, in this kind of network, there are many possible loops.
For example, the loop formed by edges E0, E3, E7, E11, E15, E12, E5, E1.
Now my problem is that I would have to describe every possible loop that can occur in this network. I tried to write the constraints as one general formula, but I did not succeed.
Can anyone offer any pointers on a possible way to encode this situation?
Just for information, I am using the Z3 SAT solver.
The following encoding can be used with any graph with N nodes and M edges. It uses (N+1)*M variables and 2*M*M 3-SAT clauses. This IPython notebook demonstrates the encoding by comparing the SAT solver results (UNSAT when there is a loop, SAT otherwise) with the results of a straightforward loop-finding algorithm.
Disclaimer: This encoding is my ad-hoc solution to the problem. I'm pretty sure that it is correct but I don't know how it compares performance-wise to other encodings for this problem. As my solution works with any graph it is to be expected that a better solution exists that uses some of the properties of the class of graphs the OP is interested in.
Variables:
I have one variable for each edge. The edge is "active" or "used" if its corresponding variable is set. In my reference implementation the edges have indices 0..(M-1) and these variables have indices 1..M:
def edge_state_var(edge_idx):
    assert 0 <= edge_idx < M
    return 1 + edge_idx
Then I have an M-bit-wide state variable for each node, or a total of N*M state bits (nodes and bits also use zero-based indexing):
def node_state_var(node_idx, bit_idx):
    assert 0 <= node_idx < N
    assert 0 <= bit_idx < M
    return 1 + M + node_idx*M + bit_idx
Clauses:
When an edge is active, it links the state variables of the two nodes it connects. The state bit with the same index as the edge must be different on the two nodes, and all other state bits must be equal to their corresponding partner on the other node. In Python code:
# which edge connects which nodes
connectivity = [
    (0, 1),  # edge E0
    (1, 2),  # edge E1
    (2, 3),  # edge E2
    (0, 4),  # edge E3
    ...
]
cnf = list()
for i in range(M):
    eb = edge_state_var(i)
    p, q = connectivity[i]
    for k in range(M):
        pb = node_state_var(p, k)
        qb = node_state_var(q, k)
        if k == i:
            # eb -> (pb != qb)
            cnf.append([-eb, -pb, -qb])
            cnf.append([-eb, +pb, +qb])
        else:
            # eb -> (pb == qb)
            cnf.append([-eb, -pb, +qb])
            cnf.append([-eb, +pb, -qb])
So basically each active edge tries to segment the graph it is part of into two halves: the nodes on one side of the edge all have the state bit corresponding to that edge set to 1, and the nodes on the other side all have it set to 0. This is not possible for a loop, where all nodes in the loop can be reached from both sides of each edge in the loop.
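For a quick end-to-end check, here is a minimal, self-contained z3py sketch of the same encoding (the triangle graph and the variable names are my own illustration, not from the notebook). A triangle is itself a closed loop, so forcing all three edge variables true should come back unsat:

from z3 import *

# A triangle graph: 3 nodes, 3 edges -- the smallest closed loop.
connectivity = [(0, 1), (1, 2), (0, 2)]
N, M = 3, len(connectivity)

def edge_state_var(e):
    return 1 + e

def node_state_var(n, k):
    return 1 + M + n * M + k

# Build the CNF exactly as described above.
cnf = []
for i in range(M):
    eb = edge_state_var(i)
    p, q = connectivity[i]
    for k in range(M):
        pb, qb = node_state_var(p, k), node_state_var(q, k)
        if k == i:
            cnf += [[-eb, -pb, -qb], [-eb, pb, qb]]   # eb -> (pb != qb)
        else:
            cnf += [[-eb, -pb, qb], [-eb, pb, -qb]]   # eb -> (pb == qb)

def lit(v):
    # map a DIMACS-style integer literal to a Z3 Bool
    b = Bool('v%d' % abs(v))
    return Not(b) if v < 0 else b

s = Solver()
s.add([Or([lit(v) for v in c]) for c in cnf])
s.add([lit(edge_state_var(e)) for e in range(M)])  # activate all edges
print(s.check())  # unsat: the triangle forms a loop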
I'm trying to find a way to encode a sort of basic subgraph isomorphism in Z3 (preferably z3py). While I know there are papers on this in the abstract, finding any mechanism to do it has eluded me even for very trivial cases, because I'm very new to Z3 in general!
Suppose you have just about the most basic subgraph with nodes (0,1,2) and edges (0,1) with node 2 off on its own, and the supergraph has nodes (0,1,2) and edges (1,2) with node 0 off on its own. You could map the nodes of the subgraph into the supergraph with
0->1,
1->2,
2->0
...as one possible mapping that would satisfy "if these two nodes are connected in the subgraph, their mapped nodes are connected in the supergraph"
So okay :) I tried
from z3 import *
from networkx import Graph
from networkx.linalg.graphmatrix import adjacency_matrix
subgraph = Graph()
subgraph.add_nodes_from([0,1,2])
subgraph.add_edges_from([(0,1)])
supergraph = Graph()
supergraph.add_nodes_from([0,1,2])
supergraph.add_edges_from([(1,2)])
s = Solver()
assignments = [Int(f'n{node}') for node in subgraph.nodes]
# each node assignment in the subgraph maps to a node in the supergraph
assignment_constraint = [ And(assignments[i] >= 0, assignments[i] <= max(supergraph.nodes)) for i in subgraph.nodes ]
# subgraph nodes can't be assigned to the same supergraph nodes
assignment_distinct = [ Distinct([assignments[i] for i in subgraph.nodes])]
which just gets me as far as "each assignment from subgraph to supergraph should map a node in the subgraph to some node in the supergraph and no two subgraph nodes can be assigned to the same supergraph node"
...but then I get stuck because I keep thinking along the lines of
for edge in subgraph.edges:
    s.add( (assignments[edge[0]], assignments[edge[1]]) in supergraph.edges )
...but of course that doesn't work because pythonically those aren't the right sort of keys so that's always false or broken.
So how does one approach that? I can add constraints like "this_var == 1" but get very confused on things like checking membership, i.e.:
>>> assignments[0] == 1.0
n0 == 1 # so that's OK then
>>> assignments[0] in [1.0, 2.0, 3.0]
False # woops, that fails horribly
and I feel like I'm missing a very basic "frame of mind" thing here.
It is relatively straightforward to encode subgraph isomorphism in z3, pretty much along the lines of how you described. However, this encoding is unlikely to scale to large graphs. As you no doubt know, subgraph isomorphism is NP-complete in general, and this encoding will cause z3 to simply enumerate all possibilities and thus will blow up exponentially.
Having said that, here's a straightforward encoding:
from z3 import *
# Subgraph, number of nodes and edges.
# Nodes will be named implicitly from 0 to noOfNodesA - 1
noOfNodesA = 3
edgesA = [(0, 1)]
# Supergraph:
noOfNodesB = 3
edgesB = [(1, 2)]
# Mapping of subgraph nodes to supergraph nodes:
mapping = Array('Map', IntSort(), IntSort())
s = Solver()
# Check that elt is between low and high, inclusive
def InRange(elt, low, high):
    return And(low <= elt, elt <= high)
# Check that (x, y) is in the list
def Contains(x, y, lst):
    return Or([And(x == x1, y == y1) for x1, y1 in lst])
# Make sure mapping is into the supergraph
s.add(And([InRange(Select(mapping, n1), 0, noOfNodesB-1) for n1 in range(noOfNodesA)]))
# Make sure we map nodes to distinct nodes
s.add(Distinct([Select(mapping, n1) for n1 in range(noOfNodesA)]))
# Make sure edges are preserved:
for x, y in edgesA:
    s.add(Contains(Select(mapping, x), Select(mapping, y), edgesB))
# Solve:
r = s.check()
if r == sat:
    m = s.model()
    for x in range(noOfNodesA):
        print("%s -> %s" % (x, m.evaluate(Select(mapping, x))))
else:
    print("Solver said: %s" % r)
I've added comments along the way, so hopefully you should be able to read the code through; feel free to ask specific questions.
When I run this, I get:
$ python a.py
0 -> 1
1 -> 2
2 -> 0
which finds exactly the mapping you alluded to in your question.
Best of luck!
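As an aside, the same edge-membership encoding also works without the Array, using per-node Int variables as in the question; here is a minimal sketch along those lines (graph names are illustrative):

from z3 import *

sub_nodes, sub_edges = [0, 1, 2], [(0, 1)]
super_nodes, super_edges = [0, 1, 2], [(1, 2)]

assignments = [Int('n%d' % n) for n in sub_nodes]
s = Solver()
# map every subgraph node into the supergraph's node range...
s.add([And(a >= 0, a <= max(super_nodes)) for a in assignments])
# ...and to pairwise distinct nodes
s.add(Distinct(assignments))
# edge preservation: the mapped endpoints must form an edge of the supergraph
for x, y in sub_edges:
    s.add(Or([And(assignments[x] == u, assignments[y] == v)
              for u, v in super_edges]))
if s.check() == sat:
    m = s.model()
    print([m[a] for a in assignments])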
I am working on my CNN project in Verilog, but I am having some problems implementing the convolution of an image with a 3x3 filter. I wrote the code for the convolution module, but now, when it comes to the convolution itself, I have to read the values from a memory that contains the pixels of the image. The thing is that I have to read these values in a particular order, since convolution takes the dot product of two matrices and then strides by 1 to the right. So let's say the image is a 5x5 matrix stored in a memory array
[ a1  a2  a3  a4  a5
  a6  a7  a8  a9  a10
  a11 a12 a13 a14 a15 ] - memory RAM
how can I read the values of the memory in the following order:
a1, then a2, then a3; then a6, then a7, then a8; and the last row a11, a12, a13; and then stride and start over with a2, a3, etc., until I reach the end of the array. Please suggest any solution for how I should address the memory in this situation; a code snippet would be highly appreciated. Thank you.
P.S. My memory array will contain a lot of data, approximately a [400x300] matrix, where the filter is [3x3].
Looks like a simple case of nested for-loops. This walks through the 16-entry memory as you wanted:
for (start=0; start<3; start=start+1)
    for (i=1; i<16; i=i+5)
        for (j=0; j<3; j=j+1)
            data = mem[start+i+j]; // C: printf("%d\n", start+i+j);
Note that the code is both C and Verilog compatible so you can test your sequence in a C-compiler if you want (I did).
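If you want to eyeball the generated address sequence without a C compiler or HDL simulator, here is the same nest of loops in Python (1-based row addressing, mirroring the snippet above):

# Mirrors the nested loops above; prints the read addresses in order.
for start in range(3):          # window start column: 0, 1, 2
    for i in range(1, 16, 5):   # row base address: 1, 6, 11
        for j in range(3):      # column within the 3x3 window
            print(start + i + j)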
If you don't like the for loops you can make them into counters. In HDL you always reverse the order and start with the inner loop:
if (j<3)
j <= j + 1;
else
begin
j <=0;
if (i<16) // should be 15 if you start from 0
i <= i + 5;
else
begin
i <= 1; // You sure it should not be zero?
if (start<3)
start <= start + 1;
else
begin
start <= 0;
all_done <= 1'b1
end // nd of start
end // end of j
end // end of i
In a different part of the design you can now use start+i+j as the address.
Last: I would start with indices 0, 1, 2, as your picture is likely to start from memory address 0. You need to change the 'i' loop for that.
(HDL code is not compiled or tested)
Algo question
A binary array of 0s/1s is given.
In one operation I can flip any array[index], i.e., 0->1 or 1->0.
The aim is to minimize the maximum length of continuous 1s or 0s by using at most k flips.
E.g., if the array is 11111 and k=1, the best is to make the array 11011,
and the minimized value of the maximum continuous run of 1s or 0s is 2.
For 111110111111 and k=3 the answer is 2.
I tried brute force (trying various position flips) but it's not efficient.
I think greedy, but I cannot figure out exactly how.
Can you please help me with an algorithm, O(n) or similar?
A solution could be devised by reading each bit in order and recording the size of each continuous group of 1s into a list A.
Once you are done filling A, you can follow the algorithm narrated by the pseudocode below:
result = N
for i = 1 to N
    flips_needed = 0
    for a in A:
        flips_needed += <number of flips needed so that no piece of a is longer than i>
    if k >= flips_needed:
        result = i
        break
return result
N is the number of bits in the entire initial sequence.
The algorithm above works by dividing the groups of 1s into pieces of size at most i. Whenever doing that requires at most k flips, we have the result we are looking for, since i starts from 1 and goes up (i.e., when we first find flips_needed <= k, we know the groups of 1s are as small as they can get).
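For reference, here is a short Python sketch of the pseudocode above (my own rendering; following the answer, it only splits the groups of 1s, and it uses the fact that a group of size a needs a // (i+1) flips so that no piece is longer than i):

def minimize_max_run(bits, k):
    # collect the lengths of the maximal runs of 1s into A
    runs, count = [], 0
    for b in bits:
        if b == 1:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)

    n = len(bits)
    for i in range(1, n + 1):
        # a run of length a needs a // (i + 1) flips to cap its pieces at i
        flips_needed = sum(a // (i + 1) for a in runs)
        if flips_needed <= k:
            return i
    return n

print(minimize_max_run([1, 1, 1, 1, 1], 1))                          # 2
print(minimize_max_run([1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1], 3))     # 2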
I am working on a component of a research tool;
I am interested in retrieving (for QF_LRA)
- multiple (minimal or otherwise) UNSAT cores, and
- multiple SAT assignments.
I have checked the forum for earlier discussions on this topic, e.g.,
How to get different unsat cores when using z3 on logic QF_LRA.
They refer to the z3 Python tutorial(s),
e.g., http://rise4fun.com/Z3Py/tutorial/musmss,
which seems to be offline for now. I have tried other suggestions (GitHub etc.) to find the mentioned tutorial, but have had no luck.
I am using the z3 Java API, but I am happy to switch to alternatives.
Here is the tutorial. You can find more information on MARCO on Mark Liffiton's web pages.
Enumeration of Minimal Unsatisfiable Cores and Maximal Satisfying Subsets
This tutorial illustrates how to use Z3 for extracting all minimal unsatisfiable cores
together with all maximal satisfying subsets.
Origin
The algorithm that we describe next
represents the essence of the core extraction procedure by Liffiton and Malik and independently
by Previti and Marques-Silva:
Enumerating Infeasibility: Finding Multiple MUSes Quickly
Mark H. Liffiton and Ammar Malik
in Proc. 10th International Conference on Integration of Artificial
Intelligence (AI) and Operations Research (OR) techniques in Constraint Programming (CPAIOR-2013), 160-175, May 2013.
Partial MUS Enumeration
Alessandro Previti, Joao Marques-Silva
in Proc. AAAI-2013 July 2013
Z3py Features
This implementation contains no tuning.
It was contributed by Mark Liffiton and it is a simplification of one of the versions available from
his Marco Polo Web site.
Code for eMUS is also available.
The example illustrates the following features of Z3's Python-based API:
Using assumptions to track unsatisfiable cores.
Using multiple solvers and passing constraints between them.
Calling the C-based API from Python. Not all API functions are supported over the Python
wrappers. This example shows how to get a unique integer identifier of an AST,
which can be used as a key in a hash-table.
Idea of the Algorithm
The main idea of the algorithm is to maintain two
logical contexts and exchange information between them:
The MapSolver is used to enumerate sets of clauses that are not already
supersets of an existing unsatisfiable core and not already a subset of a maximal satisfying assignment.
The MapSolver uses one unique atomic predicate per soft clause, so it enumerates
sets of atomic predicates. For each minimal unsatisfiable core, say, represented by predicates
p1, p2, p5, the MapSolver contains the
clause ¬ p1 ∨ ¬ p2 ∨ ¬ p5.
For each maximal satisfiable subset, say, represented by predicates
p2, p3, p5, the
MapSolver contains a clause corresponding to the disjunction of all literals
not in the maximal satisfiable subset, p1 ∨ p4 ∨ p6.
The SubsetSolver contains a set
of soft clauses (clauses with the unique indicator atom occurring negated).
The MapSolver feeds it a set of clauses (the indicator atoms).
Recall that these are not already a superset of an existing minimal
unsatisfiable core, or a subset of a maximal satisfying assignment.
If asserting these atoms makes the SubsetSolver context infeasible,
then it finds a minimal unsatisfiable subset corresponding to these atoms.
If asserting the atoms is consistent with the SubsetSolver, then
it extends this set of atoms maximally to a satisfying set.
from z3 import *

def main():
    x, y = Reals('x y')
    constraints = [x > 2, x < 1, x < 0, Or(x + y > 0, y < 0), Or(y >= 0, x >= 0), Or(y < 0, x < 0), Or(y > 0, x < 0)]
    csolver = SubsetSolver(constraints)
    msolver = MapSolver(n=csolver.n)
    for orig, lits in enumerate_sets(csolver, msolver):
        output = "%s %s" % (orig, lits)
        print(output)

def get_id(x):
    return Z3_get_ast_id(x.ctx.ref(), x.as_ast())

def MkOr(clause):
    if clause == []:
        return False
    else:
        return Or(clause)
SubsetSolver:
class SubsetSolver:
    constraints = []
    n = 0
    s = Solver()
    varcache = {}
    idcache = {}

    def __init__(self, constraints):
        self.constraints = constraints
        self.n = len(constraints)
        for i in range(self.n):
            self.s.add(Implies(self.c_var(i), constraints[i]))

    def c_var(self, i):
        if i not in self.varcache:
            v = Bool(str(self.constraints[abs(i)]))
            self.idcache[get_id(v)] = abs(i)
            if i >= 0:
                self.varcache[i] = v
            else:
                self.varcache[i] = Not(v)
        return self.varcache[i]

    def check_subset(self, seed):
        assumptions = self.to_c_lits(seed)
        return (self.s.check(assumptions) == sat)

    def to_c_lits(self, seed):
        return [self.c_var(i) for i in seed]

    def complement(self, aset):
        return set(range(self.n)).difference(aset)

    def seed_from_core(self):
        core = self.s.unsat_core()
        return [self.idcache[get_id(x)] for x in core]

    def shrink(self, seed):
        current = set(seed)
        for i in seed:
            if i not in current:
                continue
            current.remove(i)
            if not self.check_subset(current):
                current = set(self.seed_from_core())
            else:
                current.add(i)
        return current

    def grow(self, seed):
        current = seed
        for i in self.complement(current):
            current.append(i)
            if not self.check_subset(current):
                current.pop()
        return current
MapSolver:
class MapSolver:
    def __init__(self, n):
        """Initialization.
        Args:
            n: The number of constraints to map.
        """
        self.solver = Solver()
        self.n = n
        self.all_n = set(range(n))  # used in complement fairly frequently

    def next_seed(self):
        """Get the seed from the current model, if there is one.
        Returns:
            A seed as an array of 0-based constraint indexes.
        """
        if self.solver.check() == unsat:
            return None
        seed = self.all_n.copy()  # default to all True for "high bias"
        model = self.solver.model()
        for x in model:
            if is_false(model[x]):
                seed.remove(int(x.name()))
        return list(seed)

    def complement(self, aset):
        """Return the complement of a given set w.r.t. the set of mapped constraints."""
        return self.all_n.difference(aset)

    def block_down(self, frompoint):
        """Block down from a given set."""
        comp = self.complement(frompoint)
        self.solver.add(MkOr([Bool(str(i)) for i in comp]))

    def block_up(self, frompoint):
        """Block up from a given set."""
        self.solver.add(MkOr([Not(Bool(str(i))) for i in frompoint]))
def enumerate_sets(csolver, map):
    """Basic MUS/MCS enumeration, as a simple example."""
    while True:
        seed = map.next_seed()
        if seed is None:
            return
        if csolver.check_subset(seed):
            MSS = csolver.grow(seed)
            yield ("MSS", csolver.to_c_lits(MSS))
            map.block_down(MSS)
        else:
            MUS = csolver.shrink(seed)
            yield ("MUS", csolver.to_c_lits(MUS))
            map.block_up(MUS)

main()
I need to write a function that takes 4 bytes as input, performs a reversible linear transformation on this, and returns it as 4 bytes.
But wait, there is more: it also has to be distributive, so changing one byte on the input should affect all 4 output bytes.
The issues:
if I use multiplication it won't be reversible after it is modded 255 via the storage as a byte (and it needs to stay as a byte)
if I use addition it can't be reversible and distributive
One solution:
I could create an array of bytes 256^4 long and fill it in with a one-to-one mapping. This would work, but there are issues: it means I have to search a graph of size 256^8 due to having to search for free numbers for every value (I should note that the distributivity should be pseudo-random, based on a 64*64 array of bytes). This solution also has the MINOR (lol) issue of needing 8GB of RAM, making it nonsense.
The domain of the input is the same as the domain of the output; every input has a unique output, in other words: a one-to-one mapping. As I noted under "one solution", this is very possible, and I have used that method when a smaller domain (just 256) was in question. The fact is, as the numbers get big, that method becomes extraordinarily inefficient; the delta flaw was O(n^5) and omega was O(n^8), with similar crappiness in memory usage.
I was wondering if there was a clever way to do it. In a nutshell, it's a one-to-one mapping on the domain (4 bytes, or 256^4). Oh, and such simple things as N+1 can't be used; it has to be keyed off a 64*64 array of byte values that are pseudo-random but recreatable for reverse transformations.
Balanced Block Mixers are exactly what you're looking for.
Who knew?
Edit! It is not possible if you indeed want a linear transformation. Here's the mathy solution:
You've got four bytes, a_1, a_2, a_3, a_4, which we'll think of as a vector a with 4 components, each of which is a number mod 256. A linear transformation is just a 4x4 matrix M whose elements are also numbers mod 256. You have two conditions:
From Ma, we can deduce a (this means that M is an invertible matrix).
If a and a' differ in a single coordinate, then Ma and Ma' must differ in every coordinate.
Condition (2) is a little trickier, but here's what it means. Since M is a linear transformation, we know that
M(a - a') = Ma - Ma'
On the left, since a and a' differ in a single coordinate, a - a' has exactly one nonzero coordinate. On the right, since Ma and Ma' must differ in every coordinate, Ma - Ma' must have every coordinate nonzero.
So the matrix M must take a vector with a single nonzero coordinate to one with all nonzero coordinates. So we just need every entry of M to be a non-zero-divisor mod 256, i.e., to be odd.
Going back to condition (1), what does it mean for M to be invertible? Since we're considering it mod 256, we just need its determinant to be invertible mod 256; that is, its determinant must be odd.
So you need a 4x4 matrix with odd entries mod 256 whose determinant is odd. But this is impossible! Why? The determinant is computed by summing various products of entries. For a 4x4 matrix, there are 4! = 24 different summands, and each one, being a product of odd entries, is odd. But the sum of 24 odd numbers is even, so the determinant of such a matrix must be even!
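If you want to see this numerically, here is a small Python check (my own illustration): it computes the determinant of a random all-odd 4x4 matrix by brute force over the 24 permutations and confirms it is even:

from itertools import permutations
import random

def det(M):
    # brute-force Leibniz expansion: 24 signed products for a 4x4 matrix
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inv = sum(perm[a] > perm[b]
                  for a in range(n) for b in range(a + 1, n))
        term = (-1) ** inv
        for r in range(n):
            term *= M[r][perm[r]]
        total += term
    return total

# a random 4x4 matrix with all entries odd
M = [[random.randrange(1, 256, 2) for _ in range(4)] for _ in range(4)]
print(det(M) % 2)  # always 0: a sum of 24 odd products is even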
Here are your requirements as I understand them:
Let B be the space of bytes. You want a one-to-one (and thus onto) function f: B^4 -> B^4.
If you change any single input byte, then all output bytes change.
Here's the simplest solution I have thus far. I have avoided posting for a while because I kept trying to come up with a better solution, but I haven't thought of anything.
Okay, first of all, we need a function g: B -> B which takes a single byte and returns a single byte. This function must have two properties: g(x) is reversible, and x^g(x) is reversible. [Note: ^ is the XOR operator.] Any such g will do, but I will define a specific one later.
Given such a g, we define f by f(a,b,c,d) = (a^b^c^d, g(a)^b^c^d, a^g(b)^c^d, a^b^g(c)^d). Let's check your requirements:
Reversible: yes. If we XOR the first two output bytes, we get a^g(a), and by the second property of g, we can recover a. Similarly for b and c. We can recover d after getting a, b, and c by XORing the first byte with (a^b^c).
Distributive: yes. Suppose b,c, and d are fixed. Then the function takes the form f(a,b,c,d) = (a^const, g(a)^const, a^const, a^const). If a changes, then so will a^const; similarly, if a changes, so will g(a), and thus so will g(a)^const. (The fact that g(a) changes if a does is by the first property of g; if it didn't then g(x) wouldn't be reversible.) The same holds for b and c. For d, it's even easier because then f(a,b,c,d) = (d^const, d^const, d^const, d^const) so if d changes, every byte changes.
Finally, we construct such a function g. Let T be the space of two-bit values, and h : T -> T the function such that h(0) = 0, h(1) = 2, h(2) = 3, and h(3) = 1. This function has the two desired properties of g, namely h(x) is reversible and so is x^h(x). (For the latter, check that 0^h(0) = 0, 1^h(1) = 3, 2^h(2) = 1, and 3^h(3) = 2.) So, finally, to compute g(x), split x into four groups of two bits, and take h of each quarter separately. Because h satisfies the two desired properties, and there's no interaction between the quarters, so does g.
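To make the construction concrete, here is a short Python sketch (my own rendering of the h, g, and f defined above), with brute-force checks of the two required properties of g:

# h on 2-bit values: h(0)=0, h(1)=2, h(2)=3, h(3)=1
h = [0, 2, 3, 1]

def g(x):
    # apply h independently to each of the four 2-bit slices of the byte
    return sum(h[(x >> s) & 3] << s for s in (0, 2, 4, 6))

# the two required properties of g, checked by brute force:
assert len({g(x) for x in range(256)}) == 256      # g(x) is reversible
assert len({x ^ g(x) for x in range(256)}) == 256  # x^g(x) is reversible

def f(a, b, c, d):
    return (a ^ b ^ c ^ d,
            g(a) ^ b ^ c ^ d,
            a ^ g(b) ^ c ^ d,
            a ^ b ^ g(c) ^ d)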
I'm not sure I understand your question, but I think I get what you're trying to do.
Bitwise Exclusive Or is your friend.
If R = A XOR B, R XOR A gives B and R XOR B gives A back. So it's a reversible transformation, assuming you know the result and one of the inputs.
Assuming I understood what you're trying to do, I think any block cipher will do the job.
A block cipher takes a block of bits (say 128) and maps them reversibly to a different block with the same size.
Moreover, if you're using OFB mode you can use a block cipher to generate an infinite stream of pseudo-random bits. XORing these bits with your stream of bits will give you a transformation for any length of data.
I'm going to throw out an idea that may or may not work.
Use a set of linear functions mod 256, with odd prime coefficients.
For example:
b0 = 3 * a0 + 5 * a1 + 7 * a2 + 11 * a3;
b1 = 13 * a0 + 17 * a1 + 19 * a2 + 23 * a3;
If I remember the Chinese Remainder Theorem correctly, and I haven't looked at it in years, the ax are recoverable from the bx. There may even be a quick way to do it.
This is, I believe, a reversible transformation. It's linear, in that a*f(x) mod 256 = f(a*x) and (f(x) + f(y)) mod 256 = f(x + y). Clearly, changing one input byte will change all the output bytes.
So, go look up the Chinese Remainder Theorem and see if this works.
What do you mean by a "linear" transformation?
O(n), or a function f with f(c * (a+b)) = c * f(a) + c * f(b)?
An easy approach would be a rotating bitshift (not sure if this fulfils the above math definition). It's reversible, and every byte can be changed. But it does not guarantee that every byte is changed.
EDIT: My solution would be this:
b0 = (a0 ^ a1 ^ a2 ^ a3)
b1 = a1 + b0 ( mod 256)
b2 = a2 + b0 ( mod 256)
b3 = a3 + b0 ( mod 256)
It would be reversible (just subtract the first byte from the others mod 256, and then XOR the three recovered bytes into the first), and a change in one bit would change every byte (as b0 is the result of all bytes and impacts all the others).
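A quick Python round-trip check of that scheme (the function names fwd/inv are mine):

import random

def fwd(a0, a1, a2, a3):
    b0 = a0 ^ a1 ^ a2 ^ a3
    return b0, (a1 + b0) % 256, (a2 + b0) % 256, (a3 + b0) % 256

def inv(b0, b1, b2, b3):
    # subtract b0 from the other bytes, then XOR the results into b0
    a1 = (b1 - b0) % 256
    a2 = (b2 - b0) % 256
    a3 = (b3 - b0) % 256
    return b0 ^ a1 ^ a2 ^ a3, a1, a2, a3

a = tuple(random.randrange(256) for _ in range(4))
assert inv(*fwd(*a)) == a  # round-trips for any input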
Stick all of the bytes into a 32-bit number and then do a shl or shr (shift left or shift right) by one, two, or three. Then split it back into bytes (you could use a variant record). This will move bits from each byte into the adjacent byte.
There are a number of good suggestions here (XOR, etc.) I would suggest combining them.
You could remap the bits. Let's use ii for input and oo for output:
oo[0] = (ii[0] & 0xC0) | (ii[1] & 0x30) | (ii[2] & 0x0C) | (ii[3] & 0x03)
oo[1] = (ii[0] & 0x30) | (ii[1] & 0x0C) | (ii[2] & 0x03) | (ii[3] & 0xC0)
oo[2] = (ii[0] & 0x0C) | (ii[1] & 0x03) | (ii[2] & 0xC0) | (ii[3] & 0x30)
oo[3] = (ii[0] & 0x03) | (ii[1] & 0xC0) | (ii[2] & 0x30) | (ii[3] & 0x0C)
It's not linear, but significantly changing one byte in the input will affect all the bytes in the output. I don't think you can have a reversible transformation such that changing one bit in the input will affect all four bytes of the output, but I don't have a proof.