I have a set of inequalities for which I want to find a (trivial) solution.
When I use the Exists operator, everything works great, as you can see in this Z3 script and in its Z3Py version.
#!/bin/python
from z3 import *
# we have that
s = Solver()
## mu0_px is the initial marking for place px;
mu_p1, mu_p2, mu_p3 = 0, 0, 1
## pi_tj is the pre-condition from place pi to transition tj
p1_t1, p1_t2, p1_t3 = 1, 0, 0
p2_t1, p2_t2, p2_t3 = 0, 1, 0
p3_t1, p3_t2, p3_t3 = 0, 0, 1
## tj_pi is the post-condition from transition tj to place pi
t1_p1, t2_p1, t3_p1 = 0, 1, 0
t1_p2, t2_p2, t3_p2 = 1, 0, 0
t1_p3, t2_p3, t3_p3 = 0, 0, 0
## find the values for the faulty transitions
f_p1, p1_f = Ints('f_p1 p1_f')
f_p2, p2_f = Ints('f_p2 p2_f')
f_p3, p3_f = Ints('f_p3 p3_f')
# where they should be
s.add( f_p1 == 1, f_p2 == 0, f_p3 == 0 )
s.add( p1_f == 0, p2_f == 0, p3_f == 1 )
## l \in Naturals ;
l11 = Int('l11')
# Sequence 11: t1,t2,t3
s11_t1, s11_t2, s11_t3 = 1, 1, 0
# It does work! :o
s.add( l11 == 1 )
s.add(
Exists([l11],
Or(
mu_p1 + (t1_p1-p1_t1)*s11_t1 + (t2_p1-p1_t2)*s11_t2 + (t3_p1-p1_t3)*s11_t3 + l11 * (f_p1 - p1_f) < p1_t3,
mu_p2 + (t1_p2-p2_t1)*s11_t1 + (t2_p2-p2_t2)*s11_t2 + (t3_p2-p2_t3)*s11_t3 + l11 * (f_p2 - p2_f) < p2_t3,
mu_p3 + (t1_p3-p3_t1)*s11_t1 + (t2_p3-p3_t2)*s11_t2 + (t3_p3-p3_t3)*s11_t3 + l11 * (f_p3 - p3_f) < p3_t3,
)
)
)
print(s)
print(s.check())
print(s.model())
However, when I replace the existential quantifier with ForAll, as in this link and in the Python code below, there is no solution, although I believe it should still be sat.
#!/bin/python
from z3 import *
# we have that
s = Solver()
## mu0_px is the initial marking for place px;
mu_p1, mu_p2, mu_p3 = 0, 0, 1
## pi_tj is the pre-condition from place pi to transition tj
p1_t1, p1_t2, p1_t3 = 1, 0, 0
p2_t1, p2_t2, p2_t3 = 0, 1, 0
p3_t1, p3_t2, p3_t3 = 0, 0, 1
## tj_pi is the post-condition from transition tj to place pi
t1_p1, t2_p1, t3_p1 = 0, 1, 0
t1_p2, t2_p2, t3_p2 = 1, 0, 0
t1_p3, t2_p3, t3_p3 = 0, 0, 0
## find the values for the faulty transitions
f_p1, p1_f = Ints('f_p1 p1_f')
f_p2, p2_f = Ints('f_p2 p2_f')
f_p3, p3_f = Ints('f_p3 p3_f')
# where they should be
s.add( f_p1 == 1, f_p2 == 0, f_p3 == 0 )
s.add( p1_f == 0, p2_f == 0, p3_f == 1 )
## l \in Naturals ;
l11 = Int('l11')
# Sequence 11: t1,t2,t3
s11_t1, s11_t2, s11_t3 = 1, 1, 0
# It does not work! :(
s.add( l11 == 1 )
s.add(
ForAll([l11],
Or(
mu_p1 + (t1_p1-p1_t1)*s11_t1 + (t2_p1-p1_t2)*s11_t2 + (t3_p1-p1_t3)*s11_t3 + l11 * (f_p1 - p1_f) < p1_t3,
mu_p2 + (t1_p2-p2_t1)*s11_t1 + (t2_p2-p2_t2)*s11_t2 + (t3_p2-p2_t3)*s11_t3 + l11 * (f_p2 - p2_f) < p2_t3,
mu_p3 + (t1_p3-p3_t1)*s11_t1 + (t2_p3-p3_t2)*s11_t2 + (t3_p3-p3_t3)*s11_t3 + l11 * (f_p3 - p3_f) < p3_t3,
)
)
)
print(s)
print(s.check())
print(s.model())
Has anyone run into a problem like this before?
The variable l11 you declared and the one that gets used in the quantification are completely different: in particular, your constraint that it equals 1 has no bearing on the quantified formula. So you get sat with the existential version but unsat with the universal one, since the formula is clearly not true for all values of l11.
This might be confusing, but it is the intended behaviour. To see the effect, simply print the SMT-LIB equivalent and you'll see how the variables are assigned.
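For instance, a minimal way to inspect this (the exact output shape depends on your Z3 version) is to dump the solver's assertions in SMT-LIB form:
# Appended to the ForAll script above: print the assertions in SMT-LIB form.
print(s.sexpr())
# Among the output you should see something along the lines of
#   (assert (= l11 1))
#   (assert (forall ((l11 Int)) (or ...)))
# i.e. the l11 bound by the quantifier is a fresh bound variable that shadows
# the declared constant, so the constraint l11 == 1 does not restrict it.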
Is there a simple algorithm to create a CRC8 checksum from a table in Lua?
The polynomial should be x^8 + x^5 + x^4 + 1 (0x31).
This algorithm will be used to check the UID of the DS28CM00 UID chip.
Here you can find a table returned by the chip (LS byte last):
table = {112,232,9,80,1,0,0}
Thanks for any help
For Lua 5.3+ (using the native bitwise operators; 0x8C is the bit-reversed form of the polynomial 0x31, applied while processing the data bits LSB first):
local function crc8(t)
local c = 0
for _, b in ipairs(t) do
for i = 0, 7 do
c = c >> 1 ~ ((c ~ b >> i) & 1) * 0x8C
end
end
return c
end
print(crc8{112, 232, 9, 80, 1, 0, 0}) --> 219
print(crc8{2, 0x1C, 0xB8, 1, 0, 0, 0}) --> 0xA2 as in example from AN-27
For Lua 5.2 and earlier (no native bitwise operators, so the right shift and the XOR with 0x8C are emulated with arithmetic):
local function crc8(t)
local c = 0
for _, b in ipairs(t) do
for i = 0, 7 do
local c0 = c % 2
local b0 = b % 2
c = (c - c0) / 2
b = (b - b0) / 2
if c0 + b0 == 1 then
c = c + 0x80 + (c % 16 < 8 and 8 or -8) + (c % 8 < 4 and 4 or -4)
end
end
end
return c
end
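The 5.2 version computes the same checksum as the 5.3 one, so the same test call can be reused (219 is the value printed by the 5.3 version above):
print(crc8{112, 232, 9, 80, 1, 0, 0}) --> 219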
Here I use the CVXPY solver to solve a problem, but the problem does not follow DCP rules.
The objective is the sum of the bit rates r_i = alpha_i * w_i * log(1 + beta_i * p_i / w_i), as implemented below:
import cvxpy as cp
import numpy as np
def bit_rate(alpha, beta, p, w):
return alpha * w * cp.log(1 + beta * p / w)
# Create scalar optimization variables.
p_1 = cp.Variable()
p_2 = cp.Variable()
p_3 = cp.Variable()
w_1 = cp.Variable()
w_2 = cp.Variable()
w_3 = cp.Variable()
r_1 = bit_rate(2, 2, p_1, w_1)
r_2 = bit_rate(2.4, 2.4, p_2, w_2)
r_3 = bit_rate(2.8, 2.8, p_3, w_3)
# Create constraints.
constraints = [p_1 + p_2 + p_3 == 0.5,
w_1 + w_2 + w_3 == 1,
p_1 >= 0, p_2 >= 0, p_3 >= 0,
w_1 >= 0, w_2 >= 0, w_3 >= 0]
# Form objective.
obj = cp.Maximize(r_1 + r_2 + r_3)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve() # Returns the optimal value.
print("status:", prob.status)
print("optimal value", prob.value)
print("p optimal var", p_1.value, p_2.value, p_3.value)
print("W optimal var", w_1.value, w_2.value, w_3.value)
Here p and w are the variables. How can I convert the problem into a DCP-compliant one? Thanks!
The error message is:
The objective is not DCP. Its following subexpressions are not:
2.0 * var0 / var3
2.4 * var1 / var4
2.8 * var2 / var5
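One way to make this DCP-compliant (my own suggestion, not part of the original post) is to notice that w * log(1 + beta * p / w) is the perspective of the log function, which can be expressed with CVXPY's rel_entr atom, since -rel_entr(w, w + beta * p) = w * log((w + beta * p) / w). A minimal sketch, assuming a reasonably recent CVXPY; solving it needs a solver with exponential-cone support such as ECOS or SCS:
import cvxpy as cp

def bit_rate(alpha, beta, p, w):
    # w * log(1 + beta * p / w) rewritten as -rel_entr(w, w + beta * p),
    # where rel_entr(x, y) = x * log(x / y); this form is concave, so
    # maximizing the sum of rates is DCP.
    return -alpha * cp.rel_entr(w, w + beta * p)

p_1, p_2, p_3 = cp.Variable(), cp.Variable(), cp.Variable()
w_1, w_2, w_3 = cp.Variable(), cp.Variable(), cp.Variable()
r_1 = bit_rate(2, 2, p_1, w_1)
r_2 = bit_rate(2.4, 2.4, p_2, w_2)
r_3 = bit_rate(2.8, 2.8, p_3, w_3)
constraints = [p_1 + p_2 + p_3 == 0.5,
               w_1 + w_2 + w_3 == 1,
               p_1 >= 0, p_2 >= 0, p_3 >= 0,
               w_1 >= 0, w_2 >= 0, w_3 >= 0]
prob = cp.Problem(cp.Maximize(r_1 + r_2 + r_3), constraints)
prob.solve()
print("status:", prob.status)
print("optimal value", prob.value)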
I am trying to implement the Horn-Schunck optical flow algorithm with NumPy and OpenCV.
I use the Horn-Schunck method as described on the wiki page and in the original paper.
But my implementation fails on the following simple example.
Frame1:
[[ 0 0 0 0 0 0 0 0 0 0]
[ 0 255 255 0 0 0 0 0 0 0]
[ 0 255 255 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]]
Frame2:
[[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 255 255 0 0 0 0 0]
[ 0 0 0 255 255 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]]
This is just a small white rectangle that moves by 2 pixels in frame 2.
My implementation produces the following flow.
U part of flow (I apply np.round to every part of the flow; the original values are pretty much the same):
[[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
V part of flow:
[[ 0. 1. 0. -1. -0. 0. 0. 0. 0. 0.]
[-0. -0. 0. 0. 0. 0. 0. 0. 0. 0.]
[-0. -1. -0. 1. 0. 0. 0. 0. 0. 0.]
[-0. -0. -0. 0. 0. 0. 0. 0. 0. 0.]
[-0. -0. -0. 0. 0. 0. 0. 0. 0. 0.]]
It looks like this flow is incorrect, because if I move every pixel of frame 2 in the direction of the corresponding flow component, I never get frame 1.
My implementation also fails on real images.
But if I move the rectangle by 1 pixel to the right (or left, up, or down), my implementation produces:
U part of flow:
[[1 1 1 .....]
[1 1 1 .....]
......
[1 1 1 .....]]
V part of flow:
[[0 0 0 .....]
[0 0 0 .....]
......
[0 0 0 .....]]
I suppose that this flow is correct, because I can reconstruct frame 1 with the following procedure:
def translateBrute(img, u, v):
res = np.zeros_like(img)
u = np.round(u).astype(np.int)
v = np.round(v).astype(np.int)
for i in xrange(img.shape[0]):
for j in xrange(img.shape[1]):
res[i, j] = takePixel(img, i + v[i, j], j + u[i, j])
return res
where takePixel is a simple function that returns the pixel intensity if the input coordinates lie inside the image, or the intensity at the image border otherwise.
This is my implementation:
import cv2
import sys
import numpy as np
def takePixel(img, i, j):
i = i if i >= 0 else 0
j = j if j >= 0 else 0
i = i if i < img.shape[0] else img.shape[0] - 1
j = j if j < img.shape[1] else img.shape[1] - 1
return img[i, j]
#Numerical derivatives from original paper: http://people.csail.mit.edu/bkph/papers/Optical_Flow_OPT.pdf
def xDer(img1, img2):
res = np.zeros_like(img1)
for i in xrange(res.shape[0]):
for j in xrange(res.shape[1]):
sm = 0
sm += takePixel(img1, i, j + 1) - takePixel(img1, i, j)
sm += takePixel(img1, i + 1, j + 1) - takePixel(img1, i + 1, j)
sm += takePixel(img2, i, j + 1) - takePixel(img2, i, j)
sm += takePixel(img2, i + 1, j + 1) - takePixel(img2, i + 1, j)
sm /= 4.0
res[i, j] = sm
return res
def yDer(img1, img2):
res = np.zeros_like(img1)
for i in xrange(res.shape[0]):
for j in xrange(res.shape[1]):
sm = 0
sm += takePixel(img1, i + 1, j ) - takePixel(img1, i, j )
sm += takePixel(img1, i + 1, j + 1) - takePixel(img1, i, j + 1)
sm += takePixel(img2, i + 1, j ) - takePixel(img2, i, j )
sm += takePixel(img2, i + 1, j + 1) - takePixel(img2, i, j + 1)
sm /= 4.0
res[i, j] = sm
return res
def tDer(img, img2):
res = np.zeros_like(img)
for i in xrange(res.shape[0]):
for j in xrange(res.shape[1]):
sm = 0
for ii in xrange(i, i + 2):
for jj in xrange(j, j + 2):
sm += takePixel(img2, ii, jj) - takePixel(img, ii, jj)
sm /= 4.0
res[i, j] = sm
return res
averageKernel = np.array([[ 0.08333333, 0.16666667, 0.08333333],
[ 0.16666667, 0. , 0.16666667],
[ 0.08333333, 0.16666667, 0.08333333]], dtype=np.float32)
#average intensity around flow in point i,j. I use filter2D to improve performance.
def average(img):
return cv2.filter2D(img.astype(np.float32), -1, averageKernel)
def translateBrute(img, u, v):
res = np.zeros_like(img)
u = np.round(u).astype(np.int)
v = np.round(v).astype(np.int)
for i in xrange(img.shape[0]):
for j in xrange(img.shape[1]):
res[i, j] = takePixel(img, i + v[i, j], j + u[i, j])
return res
#Core of algorithm. Iterative scheme from wiki: https://en.wikipedia.org/wiki/Horn%E2%80%93Schunck_method#Mathematical_details
def hornShunckFlow(img1, img2, alpha):
img1 = img1.astype(np.float32)
img2 = img2.astype(np.float32)
Idx = xDer(img1, img2)
Idy = yDer(img1, img2)
Idt = tDer(img1, img2)
u = np.zeros_like(img1)
v = np.zeros_like(img1)
#100 iterations enough for small example
for iteration in xrange(100):
u0 = np.copy(u)
v0 = np.copy(v)
uAvg = average(u0)
vAvg = average(v0)
        # '*', '+' and '/' operate element-wise on NumPy arrays
u = uAvg - 1.0/(alpha**2 + Idx**2 + Idy**2) * Idx * (Idx * uAvg + Idy * vAvg + Idt)
v = vAvg - 1.0/(alpha**2 + Idx**2 + Idy**2) * Idy * (Idx * uAvg + Idy * vAvg + Idt)
if iteration % 10 == 0:
print 'iteration', iteration, np.linalg.norm(u - u0) + np.linalg.norm(v - v0)
return u, v
if __name__ == '__main__':
img1c = cv2.imread(sys.argv[1])
img2c = cv2.imread(sys.argv[2])
img1g = cv2.cvtColor(img1c, cv2.COLOR_BGR2GRAY)
img2g = cv2.cvtColor(img2c, cv2.COLOR_BGR2GRAY)
u, v = hornShunckFlow(img1g, img2g, 0.1)
imgRes = translateBrute(img2g, u, v)
cv2.imwrite('res.png', imgRes)
print img1g
print translateBrute(img2g, u, v)
The optimization scheme is taken from Wikipedia and the numerical derivatives are taken from the original paper.
Does anyone have an idea why my implementation produces an incorrect flow?
I can provide any additional info if necessary.
PS: Sorry for my poor English.
UPDATE:
I implemented the Horn-Schunck cost function:
def grad(img):
Idx = cv2.filter2D(img, -1, np.array([
[-1, -2, -1],
[ 0, 0, 0],
[ 1, 2, 1]], dtype=np.float32))
Idy = cv2.filter2D(img, -1, np.array([
[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1]], dtype=np.float32))
return Idx, Idy
def hornShunckCost(Idx, Idy, Idt, u, v, alpha):
#return sum(sum(It**2))
udx, udy = grad(u)
vdx, vdy = grad(v)
return (sum(sum((Idx*u + Idy*v + Idt)**2)) +
(alpha**2)*(sum(sum(udx**2)) +
sum(sum(udy**2)) +
sum(sum(vdx**2)) +
sum(sum(vdy**2))
))
and checked the value of this function during the iterations:
if iteration % 10 == 0:
print 'iter', iteration, np.linalg.norm(u - u0) + np.linalg.norm(v - v0)
print hornShunckCost(Idx, Idy, Idt, u, v, alpha)
If I use the simple example with the rectangle moved by one pixel, everything is OK: the value of the cost function decreases at every step.
But on the example with the rectangle moved by two pixels, the value of the cost function increases at every step.
This behaviour of the algorithm is really strange.
Maybe I chose an incorrect way to calculate the cost function.
I had missed the fact that the classic Horn-Schunck scheme uses a linearized data term I1(x, y) - I2(x + u(x, y), y + v(x, y)). This linearization makes the optimization easy but disallows large displacements.
To handle large displacements, the usual approach is pyramidal (coarse-to-fine) Horn-Schunck, sketched below.
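For reference, here is a rough coarse-to-fine sketch built on top of the hornShunckFlow function above. It is my own illustration, not part of the original post: the number of levels and the use of cv2.pyrDown, cv2.resize and cv2.remap for downsampling, upsampling and warping are assumptions.
def pyramidalHornSchunck(img1, img2, alpha, levels=3):
    # Gaussian pyramids: full resolution first, coarsest level last
    pyr1 = [img1.astype(np.float32)]
    pyr2 = [img2.astype(np.float32)]
    for _ in range(levels - 1):
        pyr1.append(cv2.pyrDown(pyr1[-1]))
        pyr2.append(cv2.pyrDown(pyr2[-1]))
    u = np.zeros(pyr1[-1].shape, np.float32)
    v = np.zeros(pyr1[-1].shape, np.float32)
    for lvl in reversed(range(levels)):
        i1, i2 = pyr1[lvl], pyr2[lvl]
        h, w = i1.shape
        if u.shape != i1.shape:
            # Moving to a finer level: upsample the flow and double the displacements
            u = cv2.resize(u, (w, h)) * 2.0
            v = cv2.resize(v, (w, h)) * 2.0
        # Warp frame 2 towards frame 1 with the current flow estimate, so the
        # remaining displacement is small enough for the linearized
        # Horn-Schunck step to be valid again
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        warped = cv2.remap(i2, xs + u, ys + v, cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_REPLICATE)
        du, dv = hornShunckFlow(i1, warped, alpha)
        u += du
        v += dv
    return u, v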
I'm trying to use PyBrain to predict sequences of characters belonging to the Reber grammar.
Concretely, what I'm doing is generating strings using the Reber grammar graph (you can check it here: http://www.felixgers.de/papers/phd.pdf, page 22). An example of such a string could be BPVVE. I want my neural network to learn the underlying rules of the grammar. For each of these strings I create a sequence that would typically look like this:
[B, T, S, X, P, V, E,] , [B, T, S, X, P, V, E,]
B -> value = [1, 0, 0, 0, 0, 0, 0,] , target = [0, 0, 0, 0, 1, 0, 0,]
P -> value = [0, 0, 0, 0, 1, 0, 0,] , target = [0, 0, 0, 0, 0, 1, 0,]
V -> value = [0, 0, 0, 0, 0, 1, 0,] , target = [0, 0, 0, 0, 0, 1, 0,]
V -> value = [0, 0, 0, 0, 0, 1, 0,] , target = [0, 0, 0, 0, 0, 0, 1,]
E -> E is ignored for now because it marks the end
As you can see, the value is just a 7-d vector representing the current letter, and the target is the next letter in the Reber word.
Here is the code I'm trying to run:
#!/usr/bin/python
import reberGrammar as reber
import random as rnd
from pylab import *
from pybrain.supervised import RPropMinusTrainer
from pybrain.supervised import BackpropTrainer
from pybrain.datasets import SequenceClassificationDataSet
from pybrain.structure.modules import LSTMLayer, SoftmaxLayer
from pybrain.tools.validation import testOnSequenceData
from pybrain.tools.shortcuts import buildNetwork
def reberToListInt(word): #e.g. "BPVVE" -> [0,4,3,3,5]
out = [None]*len(word)
for i,l in enumerate(word):
if l == 'B':
out[i] = 0
elif l == 'T':
out[i] = 1
elif l == 'S':
out[i] = 2
elif l == 'V':
out[i] = 3
elif l == 'P':
out[i] = 4
elif l == 'E':
out[i] = 5
else :
out[i] = 6
return out
def buildReberDataSet(numSample):
"""Generate a 7 class dataset"""
reberLexicon = reber.ReberGrammarLexicon(numSample)
DS = SequenceClassificationDataSet(7, 7, nb_classes=7)
for rw in reberLexicon.lexicon:
DS.newSequence()
rw2 = reberToListInt(rw)
for i in range(len(rw2)-1): #inserting one letter at a time
inpt = outpt = [0.0]*7
inpt[rw2[i]]=1.0
outpt[rw2[i+1]]=1.0
DS.addSample(inpt,outpt)
return DS
def printDataSet(DS, numLines): #just to print some stat
print "\t############"
print "Number of sequences: ",DS.getNumSequences()
print "Input and output dimensions: ", DS.indim,"\t", DS.outdim
print "\n"
for i in range(numLines):
for inp, target in DS.getSequenceIterator(i):
print inp,
print "\n"
print "\t#############"
'''Dataset creation / split into training and test sets'''
fullDS = buildReberDataSet(700)
tstdata, trndata = fullDS.splitWithProportion( 0.25 )
trndata._convertToOneOfMany( bounds=[0.,1.])
tstdata._convertToOneOfMany( bounds=[0.,1.])
#printDataSet(trndata,2)
'''Network setup / training'''
rnn = buildNetwork( trndata.indim, 7, trndata.outdim, hiddenclass=LSTMLayer, outclass=SoftmaxLayer, outputbias=False, recurrent=True)
trainer = RPropMinusTrainer( rnn, dataset=trndata, verbose=True )
#trainer = BackpropTrainer( rnn, dataset=trndata, verbose=True, momentum=0.9, learningrate=0.5 )
trainError=[]
testError =[]
#errors = trainer.trainUntilConvergence()
for i in range(9):
trainer.trainEpochs( 2 )
trainError.append(100. * (1.0-testOnSequenceData(rnn, trndata)))
testError.append(100. * (1.0-testOnSequenceData(rnn, tstdata)))
print "train error: %5.2f%%" % trainError[i], ", test error: %5.2f%%" % testError[i]
plot(trainError)
hold(True)
plot(testError)
show()
I fail to train this net. The errors fluctuate a lot and there is no real convergence. I would really appreciate some advice on this.
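One thing that may be worth double-checking (my own observation, not something stated in the thread): in buildReberDataSet, the line inpt = outpt = [0.0]*7 binds both names to the same list object, so each sample's input and target end up being the same two-hot vector. A minimal illustration of the aliasing:
a = b = [0.0] * 7   # both names refer to one and the same list object
a[0] = 1.0          # meant to mark the current letter in the "input"
b[4] = 1.0          # meant to mark the next letter in the "target"
print(a is b)       # True
print(a)            # [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0] -- input and target are identical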
Here is the code I'm using to generate Reber strings:
#!/usr/bin/python
import random as rnd
class ReberGrammarLexicon(object):
lexicon = set() #contain Reber words
graph = [ [(1,'T'), (5,'P')], \
[(1, 'S'), (2, 'X')], \
[(3,'S') ,(5, 'X')], \
[(6, 'E')], \
[(3, 'V'),(2, 'P')], \
[(4, 'V'), (5, 'T')] ] #store the graph
def __init__(self, num, maxSize = 1000): #fill Lexicon with num words
self.maxSize = maxSize
if maxSize < 5:
raise NameError('maxSize too small, require maxSize > 4')
while len(self.lexicon) < num:
word = self.generateWord()
if word != None:
self.lexicon.add(word)
def generateWord(self): #generate one word
c = 2
currentEdge = 0
word = 'B'
while c <= self.maxSize:
inc = rnd.randint(0,len(self.graph[currentEdge])-1)
nextEdge = self.graph[currentEdge][inc][0]
word += self.graph[currentEdge][inc][1]
currentEdge = nextEdge
if currentEdge == 6 :
break
c+=1
if c > self.maxSize :
return None
return word
Thanks,
Best
I have the following binary clock that I grabbed from this wiki article (the one that's for v1.5.*) for the awesome WM:
binClock = wibox.widget.base.make_widget()
binClock.radius = 1.5
binClock.shift = 1.8
binClock.farShift = 2
binClock.border = 1
binClock.lineWidth = 1
binClock.colorActive = beautiful.bg_focus
binClock.fit = function(binClock, width, height)
local size = math.min(width, height)
return 6 * 2 * binClock.radius + 5 * binClock.shift + 2 * binClock.farShift + 2 * binClock.border + 2 * binClock.border, size
end
binClock.draw = function(binClock, wibox, cr, width, height)
local curTime = os.date("*t")
local column = {}
table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.hour), 1, 1))))
table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.hour), 2, 2))))
table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.min), 1, 1))))
table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.min), 2, 2))))
table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.sec), 1, 1))))
table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.sec), 2, 2))))
local bigColumn = 0
for i = 0, 5 do
if math.floor(i / 2) > bigColumn then
bigColumn = bigColumn + 1
end
for j = 0, 3 do
if string.sub(column[i + 1], j + 1, j + 1) == "0" then
active = false
else
active = true
end
binClock:draw_point(cr, bigColumn, i, j, active)
end
end
end
binClock.dec_bin = function(binClock, inNum)
inNum = tonumber(inNum)
local base, enum, outNum, rem = 2, "01", "", 0
while inNum > (base - 1) do
inNum, rem = math.floor(inNum / base), math.fmod(inNum, base)
outNum = string.sub(enum, rem + 1, rem + 1) .. outNum
end
outNum = inNum .. outNum
return outNum
end
binClock.draw_point = function(binClock, cr, bigColumn, column, row, active)
cr:arc(binClock.border + column * (2 * binClock.radius + binClock.shift) + bigColumn * binClock.farShift + binClock.radius,
binClock.border + row * (2 * binClock.radius + binClock.shift) + binClock.radius, 2, 0, 2 * math.pi)
if active then
cr:set_source_rgba(0, 0.5, 0, 1)
else
cr:set_source_rgba(0.5, 0.5, 0.5, 1)
end
cr:fill()
end
binClocktimer = timer { timeout = 1 }
binClocktimer:connect_signal("timeout", function() binClock:emit_signal("widget::updated") end)
binClocktimer:start()
First, if something isn't in Lua by default, that's because this is meant to be used in the config file for awesome. :)
OK, so what I need is some guidance, actually. I am not very familiar with Lua yet, so some guidance is all I ask so I can learn. :)
First, this code outputs a normal binary clock, but every column has 4 dots (4-4, 4-4, 4-4) instead of the 2-3, 3-4, 3-4 setup a normal binary clock would have. What controls that in this code, so I can play around with it?
Next, what controls the colors? Right now it's a gray background and quite a dark green; I want to brighten both of those up.
And what controls the smoothing? Right now it outputs circles; I would like to see what it's like for it to output squares instead.
That's all I need help with, if you can point me to the code and some documentation for what I need, that should be more than enough. :)
Also, if somebody would be nice enough to add some comments, that also would be awesome. Don't have to be very detailed comments, but at least to the point where it gives an idea of what each thing does. :)
EDIT:
I found what modifies the colors, so that's figured out. None of the variables at the top control whether it's a square or a circle, BTW. :)
The draw_point function draws the dots.
The two loops in the draw function are what create the output and where the columns come from. To get a 2-3/3-4/3-4 layout, you would need to modify the inner loop to skip the first X points based on the counter of the outer loop, I believe (see the sketch below).
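A rough, untested sketch of that idea (my own illustration, not from the wiki article). The inner loop in binClock.draw can skip the leading dots of each column, and the square question from above comes down to swapping cr:arc for cr:rectangle in draw_point:
-- Inside binClock.draw, replace the inner "for j = 0, 3 do ... end" loop with
-- something like this; skip[i + 1] is the number of dots to drop at the top of
-- column i, giving a 2-3 / 3-4 / 3-4 layout instead of four dots everywhere.
local skip = { 2, 1, 1, 0, 1, 0 }
for j = 0, 3 do
    if j >= skip[i + 1] then
        local active = string.sub(column[i + 1], j + 1, j + 1) ~= "0"
        binClock:draw_point(cr, bigColumn, i, j, active)
    end
end

-- Inside binClock.draw_point, drawing a square instead of a circle means replacing
-- the cr:arc(cx, cy, 2, 0, 2 * math.pi) call with a rectangle centred on the same
-- point (cx and cy stand for the two coordinate expressions currently passed to cr:arc):
cr:rectangle(cx - 2, cy - 2, 4, 4)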