How can I generate register-based virtual machine code from a Binary Tree for math interpretation? - dart

My code is written in Dart, but the question applies more generally to the Binary Tree data structure and register-based VM implementations. I have commented the code so you can follow it even if you do not know Dart.
So, here are my nodes:
enum NodeType {
  numberNode,
  addNode,
  subtractNode,
  multiplyNode,
  divideNode,
  plusNode,
  minusNode,
}
NumberNode holds a number value. AddNode, SubtractNode, MultiplyNode and DivideNode are really just binary-op nodes, and PlusNode and MinusNode are unary operator nodes.
The tree is generated according to order of operations: unary operators bind tightest, then multiplication and division, then addition and subtraction. E.g. "1 + 2 * -3" becomes "(1 + (2 * (-3)))".
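To make those shapes concrete, the node classes look roughly like this (trimmed down; SubtractNode, MultiplyNode and DivideNode have the same two-child shape as AddNode, and PlusNode mirrors MinusNode; the field names match the walking code below, but the double type for values is an assumption):

abstract class Node {
  NodeType get nodeType;
}

class NumberNode implements Node {
  final double value; // the literal's numeric value
  NumberNode(this.value);
  @override
  NodeType get nodeType => NodeType.numberNode;
}

class AddNode implements Node {
  final Node nodeA; // left operand
  final Node nodeB; // right operand
  AddNode(this.nodeA, this.nodeB);
  @override
  NodeType get nodeType => NodeType.addNode;
}

class MinusNode implements Node {
  final Node node; // the single operand being negated
  MinusNode(this.node);
  @override
  NodeType get nodeType => NodeType.minusNode;
}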
Here is my attempt at walking the AST:
/// Converts tree to Register-based VM code
List<Opcode> convertNodeToCode(Node node) {
  List<Opcode> result = [const Opcode(OpcodeKind.loadn, 2, -1)];
  bool counterHasBeenZero = false;
  bool binOpDebounce = false;
  int counter = 0;

  List<Opcode> convert(Node node) {
    switch (node.nodeType) {
      case NodeType.numberNode:
        counter = counter == 0 ? 1 : 0;
        if (counter == 0 && !counterHasBeenZero) {
          counterHasBeenZero = true;
        } else {
          counter = 1;
        }
        return [Opcode(OpcodeKind.loadn, counter, (node as NumberNode).value)];
      case NodeType.addNode:
        var aNode = node as AddNode;
        return convert(aNode.nodeA) +
            convert(aNode.nodeB) +
            [const Opcode(OpcodeKind.addn, 0, 1)];
      case NodeType.subtractNode:
        var sNode = node as SubtractNode;
        var result = convert(sNode.nodeA) +
            convert(sNode.nodeB) +
            (binOpDebounce
                ? [const Opcode(OpcodeKind.subn, 0, 0, 1)]
                : [const Opcode(OpcodeKind.subn, 0, 1)]);
        if (!binOpDebounce) binOpDebounce = true;
        return result;
      case NodeType.multiplyNode:
        var mNode = node as MultiplyNode;
        var result = convert(mNode.nodeA) +
            convert(mNode.nodeB) +
            (binOpDebounce
                ? [const Opcode(OpcodeKind.muln, 0, 0, 1)]
                : [const Opcode(OpcodeKind.muln, 0, 1)]);
        if (!binOpDebounce) binOpDebounce = true;
        return result;
      case NodeType.divideNode:
        var dNode = node as DivideNode;
        var result = convert(dNode.nodeA) +
            convert(dNode.nodeB) +
            (binOpDebounce
                ? [const Opcode(OpcodeKind.divn, 0, 0, 1)]
                : [const Opcode(OpcodeKind.divn, 0, 1)]);
        if (!binOpDebounce) binOpDebounce = true;
        return result;
      case NodeType.plusNode:
        return convert((node as PlusNode).node);
      case NodeType.minusNode:
        return convert((node as MinusNode).node) +
            [Opcode(OpcodeKind.muln, 1, 2)];
      default:
        throw Exception('Non-existent node type');
    }
  }

  return result + convert(node) + [const Opcode(OpcodeKind.exit)];
}
I tried a method that uses just 2-3 registers plus a counter to track which register I last loaded a number into, but the code gets ugly quickly, and once order of operations is involved it becomes really hard to track where the numbers are with the counter. The basic idea was to store each number in register 0 or 1, loading as needed, and accumulate results into register 0. For example, 1 + 2 + 3 + 4 becomes [r2 = -1.0, r1 = 1.0, r0 = 2.0, r0 = r1 + r0, r1 = 3.0, r0 = r1 + r0, r1 = 4.0, r0 = r1 + r0, exit]. When I tried this with multiplication, though, it multiplied the wrong numbers, presumably because of the order of operations.
I also tried writing out by hand the code I would want to generate:
// (1 + (2 * ((-2) + 3) * 5))
const code = [
  // (-2)
  Opcode(OpcodeKind.loadn, 1, -2), // r1 = -2;
  // ((-2) + 3)
  Opcode(OpcodeKind.loadn, 2, 3), // r2 = 3;
  Opcode(OpcodeKind.addn, 2, 1, 2), // r2 = r1 + r2;
  // (2 * (result) * 5)
  Opcode(OpcodeKind.loadn, 1, 2), // r1 = 2;
  Opcode(OpcodeKind.loadn, 3, 5), // r3 = 5;
  Opcode(OpcodeKind.muln, 2, 1, 2), // r2 = r1 * r2;
  Opcode(OpcodeKind.muln, 2, 2, 3), // r2 = r2 * r3;
  // (1 + (result))
  Opcode(OpcodeKind.loadn, 1, 1), // r1 = 1;
  Opcode(OpcodeKind.addn, 1, 1, 2), // r1 = r1 + r2;
  Opcode(OpcodeKind.exit), // exit / halt
];
I knew this method would not work as-is, because to emit the instructions while iterating through the nodes I would need to know the positions of the numbers and registers beforehand, so I would need some other way to find the number/register.
You don't need to read all of the above; those were just my attempts at producing register-based virtual machine code.
I want to see how you would do it, or how you would approach making it.
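One common way to do this: compile the tree with a post-order walk and hand every subexpression its own destination register, treating the next free register index like a stack pointer. The two operands of a binary node can then never clobber each other, and once they are combined the right-hand register is freed again, so no counters or debounce flags are needed. Here is a minimal sketch in Dart, reusing the Opcode/OpcodeKind/node names from the question; the three-operand form op(dest, a, b) for all binary ops and an unbounded register file are assumptions:

List<Opcode> compile(Node root) {
  final code = <Opcode>[];
  var nextFree = 0; // lowest register not currently holding a live value

  // Emits code for `node` and returns the register holding its result.
  int emit(Node node) {
    // Shared helper for the four binary ops: the left operand lands in
    // register a, the right in register b (= a + 1), the result
    // overwrites a, and b is released for reuse.
    int binOp(OpcodeKind op, Node lhs, Node rhs) {
      final a = emit(lhs);
      final b = emit(rhs);
      code.add(Opcode(op, a, a, b)); // ra = ra <op> rb
      nextFree = a + 1;
      return a;
    }

    switch (node.nodeType) {
      case NodeType.numberNode:
        final r = nextFree++;
        code.add(Opcode(OpcodeKind.loadn, r, (node as NumberNode).value));
        return r;
      case NodeType.addNode:
        final aNode = node as AddNode;
        return binOp(OpcodeKind.addn, aNode.nodeA, aNode.nodeB);
      case NodeType.subtractNode:
        final sNode = node as SubtractNode;
        return binOp(OpcodeKind.subn, sNode.nodeA, sNode.nodeB);
      case NodeType.multiplyNode:
        final mNode = node as MultiplyNode;
        return binOp(OpcodeKind.muln, mNode.nodeA, mNode.nodeB);
      case NodeType.divideNode:
        final dNode = node as DivideNode;
        return binOp(OpcodeKind.divn, dNode.nodeA, dNode.nodeB);
      case NodeType.plusNode:
        return emit((node as PlusNode).node); // unary plus is a no-op
      case NodeType.minusNode:
        // Negate by multiplying with -1, as the question's VM does.
        final r = emit((node as MinusNode).node);
        final m = nextFree++;
        code.add(Opcode(OpcodeKind.loadn, m, -1));
        code.add(Opcode(OpcodeKind.muln, r, r, m)); // rr = rr * -1
        nextFree = r + 1; // rm is free again
        return r;
      default:
        throw Exception('Non-existent node type');
    }
  }

  emit(root);
  code.add(const Opcode(OpcodeKind.exit));
  return code;
}

With this, "1 + 2 * -3" comes out as [r0 = 1, r1 = 2, r2 = 3, r3 = -1, r2 = r2 * r3, r1 = r1 * r2, r0 = r0 + r1, exit]: the register index itself records where every intermediate value lives, so precedence falls out of the tree shape for free. If the real VM only has a handful of registers, the same scheme still works as long as expressions stay shallow; for deep trees you would add spilling, or order the operands Sethi-Ullman style to minimize the registers needed.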

Related

Unable to understand firstTerm = secondTerm; secondTerm = nextTerm; in fibonacci series

class Main {
    public static void main(String[] args) {
        int n = 5, firstTerm = 0, secondTerm = 1;
        System.out.println("Fibonacci Series till " + n + " terms:");
        for (int i = 1; i <= n; ++i) {
            System.out.print(firstTerm + " ");
            // compute the next term
            int nextTerm = firstTerm + secondTerm;
            firstTerm = secondTerm;
            secondTerm = nextTerm;
        }
    }
}
// Q) I am unable to understand why the statements "firstTerm = secondTerm; secondTerm = nextTerm;" are written. Can anyone explain this concept to me?
The Fibonacci sequence is defined by
F(0) = 0 // This is the first term
F(1) = 1 // This is the second term
F(n) = F(n - 1) + F(n - 2)
To calculate a term that is neither the first nor the second, we need to sum the two previous terms.
That is why, on each iteration, the second term's value is assigned to the first term and the newly computed term becomes the second: the pair (firstTerm, secondTerm) slides one step along the sequence. For n = 5 it evolves (0, 1) → (1, 1) → (1, 2) → (2, 3) → (3, 5), printing 0 1 1 2 3.
You will find more details here.
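For comparison, here is the same sliding-window idea written out in Dart (purely illustrative):

// Prints the first n Fibonacci numbers while keeping only the last
// two terms of the sequence in memory.
void fibonacci(int n) {
  var firstTerm = 0, secondTerm = 1;
  for (var i = 0; i < n; i++) {
    print(firstTerm);
    final nextTerm = firstTerm + secondTerm; // F(n) = F(n-1) + F(n-2)
    firstTerm = secondTerm; // the window slides one step to the right
    secondTerm = nextTerm;
  }
}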

Parse int and float values from Uint8List Dart

I'm trying to parse int and double values that I receive from a Bluetooth device, using this lib: https://github.com/Polidea/FlutterBleLib
I receive the following Uint8List data: 31,212,243,57,0,224,7,1,6,5,9,21,0,1,0,0,0,91,228
I found some help here: How do I read a 16-bit int from a Uint8List in Dart?
On Android I have done some similar work, but the library there had a so-called ValueInterpreter to which I only passed the data and received back a float/int.
Example code from Android:
int offset = 0;
final double spOPercentage = ValueInterpreter.getFloatValue(value, FORMAT_SFLOAT, offset);
Where value is a byte array
Another example from the Android side; this code is from the library:
public static Float getFloatValue(@NonNull byte[] value, int formatType, @IntRange(from = 0L) int offset) {
    if (offset + getTypeLen(formatType) > value.length) {
        return null;
    } else {
        switch (formatType) {
            case 50:
                return bytesToFloat(value[offset], value[offset + 1]);
            case 52:
                return bytesToFloat(value[offset], value[offset + 1], value[offset + 2], value[offset + 3]);
            default:
                return null;
        }
    }
}

private static float bytesToFloat(byte b0, byte b1) {
    int mantissa = unsignedToSigned(unsignedByteToInt(b0) + ((unsignedByteToInt(b1) & 15) << 8), 12);
    int exponent = unsignedToSigned(unsignedByteToInt(b1) >> 4, 4);
    return (float) ((double) mantissa * Math.pow(10.0D, (double) exponent));
}

private static float bytesToFloat(byte b0, byte b1, byte b2, byte b3) {
    int mantissa = unsignedToSigned(unsignedByteToInt(b0) + (unsignedByteToInt(b1) << 8) +
            (unsignedByteToInt(b2) << 16), 24);
    return (float) ((double) mantissa * Math.pow(10.0D, (double) b3));
}

private static int unsignedByteToInt(byte b) {
    return b & 255;
}
In Flutter/Dart I want to write my own value interpreter.
My starting example code is:
int offset = 1;
ByteData bytes = list.buffer.asByteData();
bytes.getUint16(offset);
I don't understand how the data is manipulated here in Dart to get an int value from a given position in the data list. I need some explanation of how to do this; it would be great if anyone could teach me a bit about it.
Having the following:
values: [31, 212, 243, 57,  0, 224,  7,  1,  6,  5,  9, 21,  0,  1,  0,  0,  0, 91, 228]
index:    0    1    2   3   4    5   6   7   8   9  10  11  12  13  14  15  16  17   18
When you make:
list.buffer.asByteData().getUint16(0);
you interpret [31, 212] as a single two-byte unsigned int.
If you want to get a Uint16 from the bytes at indices 9 and 10 ([5, 9]), you'd call:
list.buffer.asByteData().getUint16(9);
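A runnable sketch of the same thing, also showing the byte order, since getUint16 defaults to big-endian and BLE devices often send little-endian (which end your device uses is something you have to check against its spec):

import 'dart:typed_data';

void main() {
  final list = Uint8List.fromList([31, 212, 243, 57]);
  final bytes = list.buffer.asByteData();
  // Default is big-endian: 31 * 256 + 212 = 8148.
  print(bytes.getUint16(0));
  // Little-endian combines the same two bytes the other way round:
  // 212 * 256 + 31 = 54303.
  print(bytes.getUint16(0, Endian.little));
}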
Regarding your comment:
I have this Uint8List and the values are: 31, 212, 243, 57, 0, 224, 7, 1, 6, 5, 9, 21, 0, 1, 0, 0, 0, 91, 228. I use the code below: ByteData bytes = list.buffer.asByteData(); int offset = 1; double value = bytes.getFloat32(offset); and the value I expect should be something between 50 and 150. More info on what I am doing can be found here: bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/… name="SpO2PR-Spot-Check - SpO2"
This property is of type SFLOAT, which according to https://www.bluetooth.com/specifications/assigned-numbers/format-types/ looks like this:
0x16 SFLOAT IEEE-11073 16-bit SFLOAT
As Dart does not seem to have an easy way to get that format, you might have to create a parser yourself using raw bytes.
These might be helpful:
https://stackoverflow.com/a/51391743/6413439
https://stackoverflow.com/a/16474957/6413439
Here is something I used to convert an SFLOAT to a double in Dart for our Flutter app:
import 'dart:math';

double sfloat2double(int ieee11073) {
  var reservedValues = {
    0x07FE: 'PositiveInfinity',
    0x07FF: 'NaN',
    0x0800: 'NaN',
    0x0801: 'NaN',
    0x0802: 'NegativeInfinity'
  };
  var mantissa = ieee11073 & 0x0FFF;
  if (reservedValues.containsKey(mantissa)) {
    return 0.0; // basically an error
  }
  if ((ieee11073 & 0x0800) != 0) {
    // negative mantissa: undo the 12-bit two's complement
    // (note the bitwise NOT, matching the exponent branch below)
    mantissa = -((~ieee11073 & 0x0FFF) + 1);
  }
  var exponent = ieee11073 >> 12;
  if (((ieee11073 >> 12) & 0x8) != 0) {
    // negative exponent: undo the 4-bit two's complement
    exponent = -((~(ieee11073 >> 12) & 0x0F) + 1);
  } else {
    exponent = (ieee11073 >> 12) & 0x0F;
  }
  var magnitude = pow(10, exponent);
  return (mantissa * magnitude).toDouble();
}
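Purely to show the plumbing, you could feed it like this (whether bytes 0 and 1 of this particular characteristic really hold an SFLOAT, and in which byte order, is an assumption to verify against the spec):

import 'dart:typed_data';

void main() {
  final list = Uint8List.fromList([31, 212, 243, 57]);
  // BLE characteristics are typically little-endian: [31, 212] -> 0xD41F.
  final raw = list.buffer.asByteData().getUint16(0, Endian.little);
  // 0xD41F: mantissa 0x41F = 1055, exponent 0xD = -3, so this prints 1.055.
  print(sfloat2double(raw));
}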

Calling InterlockedAdd on RWByteAddressBuffer multiple times gives unexpected results (on NVidia)

I was looking to move away from using a counter buffer for some compute shader routines, and hit some unexpected behaviour on NVidia cards.
I made a really simplified example (it makes no sense on its own, but it's the smallest case that reproduces the issue I encounter).
I want to perform conditional writes to several locations in a buffer (also for simplicity, I only run a single thread, since the behaviour can be reproduced that way too).
I will write 4 uints, then 2 uint3s (using InterlockedAdd to "simulate" conditional writes).
So I use a single buffer (with raw access on the UAV), with the following simple layout:
0 -> first counter
4 -> second counter
8 to 24 -> first 4 uints to write
24 to 48 -> pair of uint3s to write
I also clear the buffer every frame (0 for each counter, and an arbitrary value for the rest, 12345 in this case).
I copy the buffer to a staging resource in order to check the values, so yes, my pipeline binding is correct, but I can post that code if asked.
Now I call the compute shader, first only performing the 4 increments:
RWByteAddressBuffer RWByteBuffer : BACKBUFFER;

#define COUNTER0_LOCATION 0
#define COUNTER1_LOCATION 4
#define PASS1_LOCATION 8
#define PASS2_LOCATION 24

[numthreads(1,1,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint i0, i1, i2, i3;
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i0);
    RWByteBuffer.Store(PASS1_LOCATION + i0 * 4, 10);
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i1);
    RWByteBuffer.Store(PASS1_LOCATION + i1 * 4, 20);
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i2);
    RWByteBuffer.Store(PASS1_LOCATION + i2 * 4, 30);
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i3);
    RWByteBuffer.Store(PASS1_LOCATION + i3 * 4, 40);
}
I then obtain the following results (formatted a little):
4,0,
10,20,30,40,
12345,12345,12345,12345,12345,12345,12345,12345,12345
This is correct: the counter is 4 since I called InterlockedAdd 4 times (the second counter was never touched), I get 10 through 40 in the right locations, and the rest keeps its default values.
Now if I want to reuse those indices in order to write them to another location:
[numthreads(1,1,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint i0, i1, i2, i3;
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i0);
    RWByteBuffer.Store(PASS1_LOCATION + i0 * 4, 10);
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i1);
    RWByteBuffer.Store(PASS1_LOCATION + i1 * 4, 20);
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i2);
    RWByteBuffer.Store(PASS1_LOCATION + i2 * 4, 30);
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 1, i3);
    RWByteBuffer.Store(PASS1_LOCATION + i3 * 4, 40);
    uint3 inds = uint3(i0, i1, i2);
    uint3 inds2 = uint3(i1, i2, i3);
    uint writeIndex;
    RWByteBuffer.InterlockedAdd(COUNTER1_LOCATION, 1, writeIndex);
    RWByteBuffer.Store3(PASS2_LOCATION + writeIndex * 12, inds);
    RWByteBuffer.InterlockedAdd(COUNTER1_LOCATION, 1, writeIndex);
    RWByteBuffer.Store3(PASS2_LOCATION + writeIndex * 12, inds2);
}
Now if I run that code on an Intel card (I tried an HD4000 and an HD4600) or an ATI card (a 290), I get the expected results, e.g.:
4,2,
10,20,30,40,
0,1,2,1,2,3
But running it on NVidia (I used a 970m, a GTX 1080, and a GTX 570), I get the following:
4,2,
40,12345,12345,12345,
0,0,0,0,0,0
So it seems InterlockedAdd suddenly returns 0 as its output value (it still increments properly, since the counter ends up at 4, but we end up with 40 in the first slot: every Store went to index 0 and the last write won).
We can also see that only zeros got written for i1, i2, i3.
If I "reserve memory" instead, i.e. call InterlockedAdd only once per counter (incrementing by 4 and 2, respectively):
[numthreads(1,1,1)]
void CSB(uint3 tid : SV_DispatchThreadID)
{
    uint i0;
    RWByteBuffer.InterlockedAdd(COUNTER0_LOCATION, 4, i0);
    uint i1 = i0 + 1;
    uint i2 = i0 + 2;
    uint i3 = i0 + 3;
    RWByteBuffer.Store(PASS1_LOCATION + i0 * 4, 10);
    RWByteBuffer.Store(PASS1_LOCATION + i1 * 4, 20);
    RWByteBuffer.Store(PASS1_LOCATION + i2 * 4, 30);
    RWByteBuffer.Store(PASS1_LOCATION + i3 * 4, 40);
    uint3 inds = uint3(i0, i1, i2);
    uint3 inds2 = uint3(i1, i2, i3);
    uint writeIndex;
    RWByteBuffer.InterlockedAdd(COUNTER1_LOCATION, 2, writeIndex);
    uint writeIndex2 = writeIndex + 1;
    RWByteBuffer.Store3(PASS2_LOCATION + writeIndex * 12, inds);
    RWByteBuffer.Store3(PASS2_LOCATION + writeIndex2 * 12, inds2);
}
Then this works on all cards, but I have some cases where I have to rely on the earlier behaviour.
As a side note, if I use structured buffers with a counter flag on the UAV instead of a location in a byte address buffer, and do:
RWStructuredBuffer<uint> rwCounterBuffer1;
RWStructuredBuffer<uint> rwCounterBuffer2;
RWByteAddressBuffer RWByteBuffer : BACKBUFFER;

#define PASS1_LOCATION 8
#define PASS2_LOCATION 24

[numthreads(1,1,1)]
void CS(uint3 tid : SV_DispatchThreadID)
{
    uint i0 = rwCounterBuffer1.IncrementCounter();
    uint i1 = rwCounterBuffer1.IncrementCounter();
    uint i2 = rwCounterBuffer1.IncrementCounter();
    uint i3 = rwCounterBuffer1.IncrementCounter();
    RWByteBuffer.Store(PASS1_LOCATION + i0 * 4, 10);
    RWByteBuffer.Store(PASS1_LOCATION + i1 * 4, 20);
    RWByteBuffer.Store(PASS1_LOCATION + i2 * 4, 30);
    RWByteBuffer.Store(PASS1_LOCATION + i3 * 4, 40);
    uint3 inds = uint3(i0, i1, i2);
    uint3 inds2 = uint3(i1, i2, i3);
    uint writeIndex1 = rwCounterBuffer2.IncrementCounter();
    uint writeIndex2 = rwCounterBuffer2.IncrementCounter();
    RWByteBuffer.Store3(PASS2_LOCATION + writeIndex1 * 12, inds);
    RWByteBuffer.Store3(PASS2_LOCATION + writeIndex2 * 12, inds2);
}
This works correctly across all cards, but it has all sorts of other issues (out of scope for this question).
This is running on DirectX 11 (I did not try DX12; that's not relevant to my use case, except out of plain curiosity).
So is it a bug on NVidia?
Or is there something wrong with the first approach?

backpropagation algorithm in matlab

I'm writing a backpropagation algorithm in MATLAB, but I can't get it to produce a good solution. I read the Haykin book and some topics on the Internet about how other people do it. I understand the algorithm in theory from end to end, but I make a lot of errors in practice, and I get NaN in my code.
You can see it here.
I'm trying to classify some points on a plane: three ellipses, placed one inside the other.
I wrote the function below. The second layer learns, but the first layer doesn't.
function [E, W_1, W_2, B_1, B_2, X_3] = update(W_1, W_2, B_1, B_2, X_1, T, alpha)
    V_1 = W_1 * X_1 + B_1;
    X_2 = tansig(V_1);
    V_2 = W_2 * X_2 + B_2;
    X_3 = tansig(V_2);
    E = 1 / 2 * sum((T - X_3) .^ 2);
    dE = (T - X_3);
    for j = 1 : size(X_2, 1)
        for i = 1 : size(X_3, 1)
            delta_2 = dE(i, 1) * dtansig(1, V_2(i, 1));
            W_2_tmp(i, j) = W_2(i, j) - alpha * delta_2 * X_2(j, 1);
            B_2_tmp(i, 1) = B_2(i, 1) - alpha * delta_2;
        end;
    end;
    for k = 1 : size(X_1, 1)
        for j = 1 : size(X_2, 1)
            delta_2_sum = 0;
            for i = 1 : size(X_3, 1)
                delta_2 = dE(i, 1) * dtansig(1, V_2(i, 1));
                delta_2_sum = delta_2_sum + W_2(i, j) * delta_2;
            end;
            delta_1 = delta_2_sum * dtansig(1, V_1(j, 1));
            W_1_tmp(j, k) = W_1(j, k) - alpha * delta_1 * X_1(k, 1);
            B_1_tmp(j, 1) = B_1(j, 1) - alpha * delta_1;
        end;
    end;
    if (min(W_1) < -10000)
        X = 1;
    end;
    B_1 = B_1_tmp;
    B_2 = B_2_tmp;
    W_1 = W_1_tmp
    W_2 = W_2_tmp;
end
I also wrote another variant of the code, and it doesn't work either. I ran it with a 1-dimensional vector as input and as output, and I don't get a correct result. What can I do?
I use the MATLAB nntool interface, but my backprop was written by hand.
How can I test my code?
function [net] = backProp(net, epoch, alpha)
    for u = 1 : epoch % number of epochs
        for p = 1 : size(net.userdata{1, 1}, 2)
            % train on every element of the data set
            [~, ~, ~, De, Df, f] = frontProp(net, p, 1);
            for l = size(net.LW, 1) : -1 : 1 % walk the layers backwards
                if (size(net.LW, 1) == l)
                    delta{l} = De .* Df{l};
                else
                    % size(delta{l + 1})
                    % size(net.LW{l + 1})
                    delta{l} = Df{l} .* (delta{l + 1}' * net.LW{l + 1})';
                end;
                if (l == 1)
                    net.IW{l} + alpha * delta{l} * f{l}'
                    net.IW{l} = net.IW{l} + alpha * delta{l} * f{l}';
                else
                    net.LW{l} + alpha * delta{l} * f{l}'
                    net.LW{l} = net.LW{l} + alpha * delta{l} * f{l}';
                end;
            end;
        end;
    end;
end

Binary clock with Lua, how to remove dots that aren't used?

I have the following binary clock that I grabbed from this wiki article (the one that's for v1.5.*) for the awesome WM:
binClock = wibox.widget.base.make_widget()
binClock.radius = 1.5
binClock.shift = 1.8
binClock.farShift = 2
binClock.border = 1
binClock.lineWidth = 1
binClock.colorActive = beautiful.bg_focus

binClock.fit = function(binClock, width, height)
    local size = math.min(width, height)
    return 6 * 2 * binClock.radius + 5 * binClock.shift + 2 * binClock.farShift + 2 * binClock.border + 2 * binClock.border, size
end

binClock.draw = function(binClock, wibox, cr, width, height)
    local curTime = os.date("*t")
    local column = {}
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.hour), 1, 1))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.hour), 2, 2))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.min), 1, 1))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.min), 2, 2))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.sec), 1, 1))))
    table.insert(column, string.format("%04d", binClock:dec_bin(string.sub(string.format("%02d", curTime.sec), 2, 2))))
    local bigColumn = 0
    for i = 0, 5 do
        if math.floor(i / 2) > bigColumn then
            bigColumn = bigColumn + 1
        end
        for j = 0, 3 do
            if string.sub(column[i + 1], j + 1, j + 1) == "0" then
                active = false
            else
                active = true
            end
            binClock:draw_point(cr, bigColumn, i, j, active)
        end
    end
end

binClock.dec_bin = function(binClock, inNum)
    inNum = tonumber(inNum)
    local base, enum, outNum, rem = 2, "01", "", 0
    while inNum > (base - 1) do
        inNum, rem = math.floor(inNum / base), math.fmod(inNum, base)
        outNum = string.sub(enum, rem + 1, rem + 1) .. outNum
    end
    outNum = inNum .. outNum
    return outNum
end

binClock.draw_point = function(binClock, cr, bigColumn, column, row, active)
    cr:arc(binClock.border + column * (2 * binClock.radius + binClock.shift) + bigColumn * binClock.farShift + binClock.radius,
           binClock.border + row * (2 * binClock.radius + binClock.shift) + binClock.radius, 2, 0, 2 * math.pi)
    if active then
        cr:set_source_rgba(0, 0.5, 0, 1)
    else
        cr:set_source_rgba(0.5, 0.5, 0.5, 1)
    end
    cr:fill()
end

binClocktimer = timer { timeout = 1 }
binClocktimer:connect_signal("timeout", function() binClock:emit_signal("widget::updated") end)
binClocktimer:start()
First, if something isn't in Lua by default, that's because this is meant to be used in the config file for awesome. :)
OK, so what I need is some guidance, actually. I am not very familiar with Lua currently, so some guidance is all I ask, so I can learn. :)
First, this code outputs a normal binary clock, but every column has 4 dots (a 44,44,44 layout) instead of the 23,34,34 setup a normal binary clock would have. What controls that in this code, so that I can play around with it?
Next, what controls the color? Right now it's a gray background and quite a dark green; I want to brighten both of those up.
And what controls the dot shape? Right now it's outputting circles; I would like to see what it looks like if it output squares instead.
That's all I need help with. If you can point me to the code and some documentation for what I need, that should be more than enough. :)
Also, if somebody would be nice enough to add some comments, that would also be awesome. They don't have to be very detailed, just enough to give an idea of what each thing does. :)
EDIT:
Found what modifies the colors, so that's figured out. None of the variables at the top control whether it's a square or a circle, BTW. :)
The draw_point function draws the dots.
The two loops in the draw function are what create the output and are where the columns come from. To get a 23/34/34 layout, you would need to modify the inner loop to skip the first X points, based on the counter of the outer loop, I believe.
