How to apply an updater to a Line connected to a function graph - manim

Here's my code:
class Formula2(GraphScene):
    CONFIG = {
        "x_min": 0,
        "x_max": 10.3,
        "x_tick_frequency": 1,
        "y_min": 0,
        "y_max": 10.3,
        "y_tick_frequency": 1,
        "graph_origin": [-4, -3, 0],
        "function_color": RED,
        "axes_color": WHITE,
        "x_labeled_nums": range(1, 10, 1),
        "y_labeled_nums": range(1, 10, 1)
    }
    def construct(self):
        self.setup_axes(animate=True)
        func_graph = self.get_graph(self.func_to_graph, self.function_color)
        vert_line = Line(start=self.coords_to_point(1, 0), color=YELLOW)
        vert_line.add_updater(lambda d: d.put_start_and_end_on(
            vert_line.start,
            self.input_to_graph_point(self.point_to_coords(vert_line.start)[0], func_graph)))
        self.play(ShowCreation(func_graph))
        self.play(ShowCreation(vert_line))
        self.play(ApplyMethod(vert_line.shift, RIGHT))
    def func_to_graph(self, x):
        return x
And the updater doesn't work. I want the start of the line to be on the x-axis and its end on the function graph. What's the problem with my code?

The key fix is to register vert_line with self.add before animating it, because updaters only run on mobjects that are part of the scene:
class Formula2(GraphScene):
    CONFIG = {
        "x_min": 0,
        "x_max": 10.3,
        "x_tick_frequency": 1,
        "y_min": 0,
        "y_max": 10.3,
        "y_tick_frequency": 1,
        "graph_origin": [-4, -3, 0],
        "function_color": RED,
        "axes_color": WHITE,
        "x_labeled_nums": range(1, 10, 1),
        "y_labeled_nums": range(1, 10, 1)
    }
    def construct(self):
        self.setup_axes(animate=False)
        func_graph = self.get_graph(self.func_to_graph, self.function_color)
        vert_line = Line(start=self.coords_to_point(1, 0), color=YELLOW)
        vert_line.add_updater(
            lambda mob: mob.put_start_and_end_on(
                vert_line.get_start(),
                func_graph.get_end()
            )
        )
        self.play(ShowCreation(func_graph))
        # add vert_line to the scene, because it has an updater attached
        self.add(vert_line)
        self.play(ShowCreation(vert_line))
        self.play(ApplyMethod(vert_line.shift, RIGHT))
    def func_to_graph(self, x):
        return x
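Note that this updater pins the line's end to the end of the graph. To get exactly what the question asks for (start on the x-axis, end on the curve directly above it), the updater can instead recompute the graph point from the line's current x on every frame. A sketch using the same GraphScene helpers the question already calls (drop it in place of the updater above; untested against your exact manim version):

# Keep the end of the line on the curve above its current start.
vert_line.add_updater(
    lambda mob: mob.put_start_and_end_on(
        mob.get_start(),  # foot of the line on the x-axis
        self.input_to_graph_point(
            self.point_to_coords(mob.get_start())[0],  # x-value of the foot
            func_graph
        )
    )
)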

Related

How can I get the coordinates of some point on an already-settled Graph?

In manim, how do I get the coordinates of a certain point on a graph?
class Formula2(GraphScene):
    CONFIG = {
        "x_min": 0,
        "x_max": 100.3,
        "x_tick_frequency": 10,
        "y_min": 0,
        "y_max": 100.3,
        "y_tick_frequency": 10,
        "graph_origin": [-4, -3, 0],
        "function_color": RED,
        "axes_color": WHITE,
        "x_labeled_nums": range(10, 100, 10),
        "y_labeled_nums": range(10, 100, 10)
    }
    def construct(self):
        self.setup_axes(animate=True)
What method or function do I need next?
You can use the method coords_to_point (and its inverse, point_to_coords); more details in my video.
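As a quick sketch of the two directions inside construct, using the same GraphScene helpers that appear in the question above (the graph and the sample value 50 are just for illustration):

def construct(self):
    self.setup_axes(animate=True)
    func_graph = self.get_graph(lambda x: x, RED)
    # axis coordinates -> scene point (e.g. where (50, 50) sits on screen)
    point = self.coords_to_point(50, 50)
    # scene point -> axis coordinates (the inverse direction)
    x, y = self.point_to_coords(point)
    # x-value -> the corresponding point on the graph itself
    graph_point = self.input_to_graph_point(50, func_graph)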

ANN regression, linear function approximation

I have built a regular ANN–BP setup with one unit on the input and output layer and 4 nodes in the hidden layer with sigmoid. I'm giving it a simple task: approximate the linear function f(n) = n with n in the range 0–100.
PROBLEM: Regardless of the number of layers, the number of units in the hidden layer, or whether or not I am using bias in the node values, it learns to approximate f(n) = Average(dataset), i.e. a flat line at the mean of the dataset.
The code is written in JavaScript as a proof of concept. I have defined three classes: Net, Layer and Connection, where Layer is an array of input, bias and output values, and Connection is a 2D array of weights and delta weights. Here is the Layer code, where all the important calculations happen:
Ann.Layer = function(nId, oNet, oConfig, bUseBias, aInitBiases) {
    var _oThis = this;

    var _initialize = function() {
        _oThis.id = nId;
        _oThis.length = oConfig.nodes;
        _oThis.outputs = new Array(oConfig.nodes);
        _oThis.inputs = new Array(oConfig.nodes);
        _oThis.gradients = new Array(oConfig.nodes);
        _oThis.biases = new Array(oConfig.nodes);
        _oThis.outputs.fill(0);
        _oThis.inputs.fill(0);
        _oThis.biases.fill(0);
        if (bUseBias) {
            for (var n = 0; n < oConfig.nodes; n++) {
                _oThis.biases[n] = Ann.random(aInitBiases[0], aInitBiases[1]);
            }
        }
    };

    /****************** PUBLIC ******************/
    this.id;
    this.length;
    this.inputs;
    this.outputs;
    this.gradients;
    this.biases;
    this.next;
    this.previous;
    this.inConnection;
    this.outConnection;

    this.isInput = function() { return !this.previous; }
    this.isOutput = function() { return !this.next; }

    this.calculateGradients = function(aTarget) {
        var n, n1, nOutputError,
            fDerivative = Ann.Activation.Derivative[oConfig.activation];
        if (this.isOutput()) {
            for (n = 0; n < oConfig.nodes; n++) {
                nOutputError = this.outputs[n] - aTarget[n];
                this.gradients[n] = nOutputError * fDerivative(this.outputs[n]);
            }
        } else {
            for (n = 0; n < oConfig.nodes; n++) {
                nOutputError = 0.0;
                for (n1 = 0; n1 < this.outConnection.weights[n].length; n1++) {
                    nOutputError += this.outConnection.weights[n][n1] * this.next.gradients[n1];
                }
                // console.log(this.id, nOutputError, this.outputs[n], fDerivative(this.outputs[n]));
                this.gradients[n] = nOutputError * fDerivative(this.outputs[n]);
            }
        }
    }

    this.updateInputWeights = function() {
        if (!this.isInput()) {
            var nY,
                nX,
                nOldDeltaWeight,
                nNewDeltaWeight;
            for (nX = 0; nX < this.previous.length; nX++) {
                for (nY = 0; nY < this.length; nY++) {
                    nOldDeltaWeight = this.inConnection.deltaWeights[nX][nY];
                    nNewDeltaWeight =
                        - oNet.learningRate
                        * this.previous.outputs[nX]
                        * this.gradients[nY]
                        // Add momentum, a fraction of old delta weight
                        + oNet.learningMomentum
                        * nOldDeltaWeight;
                    if (nNewDeltaWeight == 0 && nOldDeltaWeight != 0) {
                        console.log('Double overflow');
                    }
                    this.inConnection.deltaWeights[nX][nY] = nNewDeltaWeight;
                    this.inConnection.weights[nX][nY] += nNewDeltaWeight;
                }
            }
        }
    }

    this.updateInputBiases = function() {
        if (bUseBias && !this.isInput()) {
            var n,
                nNewDeltaBias;
            for (n = 0; n < this.length; n++) {
                nNewDeltaBias =
                    - oNet.learningRate
                    * this.gradients[n];
                this.biases[n] += nNewDeltaBias;
            }
        }
    }

    this.feedForward = function(a) {
        var fActivation = Ann.Activation[oConfig.activation];
        this.inputs = a;
        if (this.isInput()) {
            this.outputs = this.inputs;
        } else {
            for (var n = 0; n < a.length; n++) {
                this.outputs[n] = fActivation(a[n] + this.biases[n]);
            }
        }
        if (!this.isOutput()) {
            this.outConnection.feedForward(this.outputs);
        }
    }

    _initialize();
}
The main feedForward and backProp functions are defined like so:
this.feedForward = function(a) {
    this.layers[0].feedForward(a);
    this.netError = 0;
}

this.backPropagate = function(aExample, aTarget) {
    this.target = aTarget;
    if (aExample.length != this.getInputCount()) { throw "Wrong input count in training data"; }
    if (aTarget.length != this.getOutputCount()) { throw "Wrong output count in training data"; }
    this.feedForward(aExample);
    _calculateNetError(aTarget);
    var oLayer = null,
        nLast = this.layers.length - 1,
        n;
    for (n = nLast; n > 0; n--) {
        if (n === nLast) {
            this.layers[n].calculateGradients(aTarget);
        } else {
            this.layers[n].calculateGradients();
        }
    }
    for (n = nLast; n > 0; n--) {
        this.layers[n].updateInputWeights();
        this.layers[n].updateInputBiases();
    }
}
Connection code is rather simple:
Ann.Connection = function(oNet, oConfig, aInitWeights) {
    var _oThis = this;

    var _initialize = function() {
        var nX, nY, nIn, nOut;
        _oThis.from = oNet.layers[oConfig.from];
        _oThis.to = oNet.layers[oConfig.to];
        nIn = _oThis.from.length;
        nOut = _oThis.to.length;
        _oThis.weights = new Array(nIn);
        _oThis.deltaWeights = new Array(nIn);
        for (nX = 0; nX < nIn; nX++) {
            _oThis.weights[nX] = new Array(nOut);
            _oThis.deltaWeights[nX] = new Array(nOut);
            _oThis.deltaWeights[nX].fill(0);
            for (nY = 0; nY < nOut; nY++) {
                _oThis.weights[nX][nY] = Ann.random(aInitWeights[0], aInitWeights[1]);
            }
        }
    };

    /****************** PUBLIC ******************/
    this.weights;
    this.deltaWeights;
    this.from;
    this.to;

    this.feedForward = function(a) {
        var n, nX, nY, aOut = new Array(this.to.length);
        for (nY = 0; nY < this.to.length; nY++) {
            n = 0;
            for (nX = 0; nX < this.from.length; nX++) {
                n += a[nX] * this.weights[nX][nY];
            }
            aOut[nY] = n;
        }
        this.to.feedForward(aOut);
    }

    _initialize();
}
And my activation functions and derivatives are defined like so:
Ann.Activation = {
    linear : function(n) { return n; },
    sigma  : function(n) { return 1.0 / (1.0 + Math.exp(-n)); },
    tanh   : function(n) { return Math.tanh(n); }
}

// Note: these derivatives are written in terms of the activation's
// *output* (which is what calculateGradients passes in), not its input.
Ann.Activation.Derivative = {
    linear : function(n) { return 1.0; },
    sigma  : function(n) { return n * (1.0 - n); },
    tanh   : function(n) { return 1.0 - n * n; }
}
And the configuration object for the network is as follows:
var Config = {
    id : "Config1",
    learning_rate : 0.01,
    learning_momentum : 0,
    init_weight : [-1, 1],
    init_bias : [-1, 1],
    use_bias : false,
    layers : [
        {nodes : 1},
        {nodes : 4, activation : "sigma"},
        {nodes : 1, activation : "linear"}
    ],
    connections : [
        {from : 0, to : 1},
        {from : 1, to : 2}
    ]
}
Perhaps your experienced eye can spot the problem with my calculations?
See example in JSFiddle
I did not look extensively at the code (it is a lot of code to look at, I would need more time for that later, and I am not 100% familiar with JavaScript). Either way, I believe Stephen introduced some changes in how the weights are calculated, and his code seems to give correct results, so I'd recommend looking at that.
Here are a few points though that are not necessarily about the correctness of computations, but may still help:
How many examples are you showing the network for training? Are you showing the same input multiple times? You should show every example that you have (inputs) multiple times; showing every example only one time is not sufficient for algorithms based on gradient descent to learn, since they only move a little bit in the correct direction every time. It is possible that all of your code is correct, but you simply have to give it a bit more time to train.
Introducing more hidden layers like Stephen did may help to speed up training, or it may be detrimental. This is typically something you'd want to experiment with for your specific case. It definitely shouldn't be necessary for this simple problem though. I suspect a more important difference between your configuration and Stephen's configuration may be the activation function used in the hidden layer(s). You used a sigmoid, which means that all of the input values get squashed to lie below 1.0 in the hidden layer, and then you need very large weights to transform these numbers back to the desired output (which can be up to a value of 100). Stephen used linear activation functions for all layers, which in this specific case is likely to make training much easier, because you are actually trying to learn a linear function. In many other cases it would be desirable to introduce non-linearities though.
It may be beneficial to transform (normalize) both your input and your desired output to lie in [0, 1] instead of [0, 100]. This would make it more likely for your sigmoid layer to produce good results (though I'm still not sure if it would be enough, because you're still introducing a nonlinearity in a case where you intend to learn a linear function, and you may need more hidden nodes to correct for that). In "real-world" cases, where you have multiple different input variables, this is also typically done, because it ensures that all input variables are treated as being equally important initially. You could always do a preprocessing step where you normalize the input to [0, 1], give that as input to the network, train it to produce output in [0, 1], and then add a postprocessing step where you transform the output back to the original range (see the sketch below).
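To make those last two points concrete, here is a minimal sketch of a multi-epoch training loop with [0, 1] normalization. It reuses the question's own backPropagate, feedForward and getOutputLayer; the Ann.Net constructor call and the epoch/step counts are assumptions for illustration:

// Hypothetical wiring: the constructor name is an assumption;
// backPropagate/feedForward/getOutputLayer are from the question's code.
var oNet = new Ann.Net(Config);
var nMax = 100; // raw inputs and targets live in [0, 100]

// Show the whole dataset many times (epochs), not just once,
// with inputs and targets normalized into [0, 1].
for (var nEpoch = 0; nEpoch < 10000; nEpoch++) {
    for (var n = 0; n <= 100; n += 5) {
        oNet.backPropagate([n / nMax], [n / nMax]); // target: f(n) = n, scaled
    }
}

// Postprocess: feed a normalized input forward, then scale the output back up.
oNet.feedForward([42 / nMax]);
var nPrediction = oNet.getOutputLayer().outputs[0] * nMax;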
First... I really like this code. I know very little about NNs (just getting started), so pardon any shortcomings here.
Here is a summary of the changes I made:
// updateInputWeights has this in the middle now:
nNewDeltaWeight =
    oNet.learningRate
    * this.gradients[nY]
    / this.previous.outputs[nX]
    // Add momentum, a fraction of old delta weight
    + oNet.learningMomentum
    * nOldDeltaWeight;

// updateInputWeights has this at the bottom now:
this.inConnection.deltaWeights[nX][nY] += nNewDeltaWeight; // += added
this.inConnection.weights[nX][nY] += nNewDeltaWeight;

// I modified the following:
_calculateNetError2 = function(aTarget) {
    var oOutputLayer = _oThis.getOutputLayer(),
        nOutputCount = oOutputLayer.length,
        nError = 0.0,
        nDelta = 0.0,
        n;
    for (n = 0; n < nOutputCount; n++) {
        nDelta = aTarget[n] - oOutputLayer.outputs[n];
        nError += nDelta;
    }
    _oThis.netError = nError;
};
The config section looks like this now:
var Config = {
    id : "Config1",
    learning_rate : 0.001,
    learning_momentum : 0.001,
    init_weight : [-1.0, 1.0],
    init_bias : [-1.0, 1.0],
    use_bias : false,
    /*
    layers : [
        {nodes : 1, activation : "linear"},
        {nodes : 5, activation : "linear"},
        {nodes : 1, activation : "linear"}
    ],
    connections : [
        {from : 0, to : 1},
        {from : 1, to : 2}
    ]
    */
    layers : [
        {nodes : 1, activation : "linear"},
        {nodes : 2, activation : "linear"},
        {nodes : 2, activation : "linear"},
        {nodes : 2, activation : "linear"},
        {nodes : 2, activation : "linear"},
        {nodes : 1, activation : "linear"}
    ],
    connections : [
        {from : 0, to : 1},
        {from : 1, to : 2},
        {from : 2, to : 3},
        {from : 3, to : 4},
        {from : 4, to : 5}
    ]
}

How do I refer to a register in a foreach loop in LLVM?

I'm currently trying to define the registers of the architecture I work with via TableGen. There are supposed to be two computation blocks, XR and YR, and a pseudoblock XYR referring to them. For example, XYR3 is a vector pseudoregister embracing XR3 and YR3.
// Classes for registers of my namespace.
class TigerSHARCReg<bits<5> num, string n, list<string> altNames = []> :
    Register<n, altNames>
{
    field bits<5> Num = num;
    let Namespace = "TigerSHARC";
}

class TigerSHARCVReg<bits<5> num, string n, list<TigerSHARCReg> subregs, list<SubRegIndex> indices = []> :
    RegisterWithSubRegs<n, subregs>
{
    field bits<5> Num = num;
    let Namespace = "TigerSHARC";
    let SubRegIndices = indices;
}

class TigerSHARCSubRegIndex<int size, int offset> : SubRegIndex<size, offset>
{
    let Namespace = "TigerSHARC";
}

// === === ===

// XR registers and XR register class
foreach num = 0-31 in
    def XR#num : TigerSHARCReg<num, "XR"#num>;

def XR : RegisterClass<"TigerSHARC", [i32, f32], 32,
    (sequence "XR%u", 0, 31)>;

// YR registers and YR register class
foreach num = 0-31 in
    def YR#num : TigerSHARCReg<num, "YR"#num>;

def YR : RegisterClass<"TigerSHARC", [i32, f32], 32,
    (sequence "YR%u", 0, 31)>;

// There are only two subregisters in each XYR
def XYRsub0 : TigerSHARCSubRegIndex<1, 0>;
def XYRsub1 : TigerSHARCSubRegIndex<1, 0>;

// XYR registers and XYR register class
foreach num = 0-31 in
    def XYR#num : TigerSHARCVReg<0, "XYR0", [XR#num, YR#num], [XYRsub0, XYRsub1]>;

def XYR : RegisterClass<"TigerSHARC", [v2i32], 32, (sequence "XYR%u", 0, 31)>;
The problem is in these lines:
foreach num = 0-31 in
    def XYR#num : TigerSHARCVReg<0, "XYR0", [XR#num, YR#num], [XYRsub0, XYRsub1]>;
"#" concatenates only strings, so [XR#num, YR#num] is incorrect notation. I've tried XR[num], but that doesn't seem to work either.
Is there a way to refer to an existing register in a loop?
Also, am I even doing it right?
It looks like instead of [XR#num, YR#num] one should use [!cast<TigerSHARCReg>("XR"#num), !cast<TigerSHARCReg>("YR"#num)]. !cast<T>(a) looks up the string a in the symbol table.
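Applied to the question's loop, that would look like this (a sketch; I've also given each XYR its own number and name instead of the hard-coded 0 and "XYR0"):

// XYR registers: look the XR/YR defs up by name with !cast.
foreach num = 0-31 in
    def XYR#num : TigerSHARCVReg<num, "XYR"#num,
        [!cast<TigerSHARCReg>("XR"#num), !cast<TigerSHARCReg>("YR"#num)],
        [XYRsub0, XYRsub1]>;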

Titanium: several video players on one view

I'm trying to create several video players in the Titanium tableview (something like a video gallery), but as I'm adding the video players in a loop, only the last video player in the loop gets added.
With yGrid = 2 and xGrid = 3, only 6.mp4 is added.
for (var y = 0; y < yGrid; y++) {
    var thisRow = Ti.UI.createTableViewRow({
        className : "grid",
        layout : "horizontal",
        height : cellHeight + (2 * ySpacer),
        backgroundColor : '#cfffffff',
        selectionStyle : 'NONE',
        backgroundImage : '/images/backg2.png'
    });
    var thisPlayer = [];
    for (var x = 0; x < xGrid; x++) {
        thisPlayer[x] = Titanium.Media.createVideoPlayer({
            objName : "video-view",
            objIndex : cellIndex.toString(),
            left : ySpacer,
            height : cellHeight,
            width : cellWidth,
            url : '/video/' + cellIndex.toString() + '.mp4',
            mediaControlStyle : Titanium.Media.VIDEO_CONTROL_DEFAULT,
            scalingMode : Titanium.Media.VIDEO_SCALING_MODE_SIZE,
            zIndex : 10,
            autoplay : false
        });
        thisRow.add(thisPlayer[x]);
        cellIndex++;
    }
    tableData.push(thisRow);
}

var tableview = Ti.UI.createTableView({
    left : 0,
    top : App.geometry.menuHeight + App.geometry.lineHeight + 5,
    bottom : App.geometry.menuHeight + App.geometry.lineHeight,
    width : '100%',
    backgroundImage : '/images/backg2.png',
    data : tableData
});
view.add(tableview);
Where is the issue?
It turned out that the issue is that iOS doesn't support several video players at one time.
I've solved it like this, swapping the video players for web views:
for (var x = 0; x < xGrid; x++) {
    thisPlayer[x] = Ti.UI.createWebView({
        objName : "video-view",
        objIndex : cellIndex.toString(),
        left : ySpacer,
        height : cellHeight,
        width : cellWidth
    });
    thisPlayer[x].setHtml('<div><video width="100%" height="94%" controls><source src="./video/' + cellIndex.toString() + '.mp4" type="video/mp4"></video></div>');
    thisRow.add(thisPlayer[x]);
    cellIndex++;
}

Restrict number of characters in JqxGrid

I have a jqxGrid as below, and I want to restrict the number of characters entered in the grid.
columns : [ {
    text : 'Type', datafield : 'type', width : 150, align : 'center', cellsalign : 'left', editable : false
}, {
    text : 'Phase', datafield : 'phase', width : 150, align : 'center', cellsalign : 'left', editable : false
}, {
    text : 'Phase Description', datafield : 'phaseDescription', width : 150, align : 'center', cellsalign : 'left', editable : false
}, {
    text : 'Custom Phase', datafield : 'customPhase', width : 150, align : 'center', cellsalign : 'left'
} ]
For the column 'Custom Phase' I need to restrict user entry to 10 characters. How do I achieve it?
For this, you have to use jqWidgets validation: include the file jqxvalidator.js in your view file and use this code in the column definition:
columns : [ {
    text : 'Type', datafield : 'type', width : 150, align : 'center', cellsalign : 'left', editable : false
}, {
    text : 'Phase', datafield : 'phase', width : 150, align : 'center', cellsalign : 'left', editable : false
}, {
    text : 'Phase Description', datafield : 'phaseDescription', width : 150, align : 'center', cellsalign : 'left', editable : false
}, {
    text : 'Custom Phase', datafield : 'customPhase', width : 150, align : 'center', cellsalign : 'left',
    validation : function (cell, value) {
        if (value.length > 10) {
            return { result : false, message : "character should be maximum 10" };
        }
        return true;
    }
} ]
This demo shows use of the column's "validation" function: cellediting.htm.
validation : function (cell, value) {
    if (value.toString().length > 10) {
        return { result : false, message : "entered text should be less than 10 characters" };
    }
    return true;
}
toString() is required because the value could be a Number or a Date object.
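For context, here's a hypothetical sketch of how such a column plugs into the grid initialization; the element id, data adapter, and editable flag are illustrative assumptions, while the validation callback follows the pattern above:

// Hypothetical setup: '#jqxgrid' and dataAdapter are placeholders.
$("#jqxgrid").jqxGrid({
    width : 600,
    source : dataAdapter,   // assumed jqx.dataAdapter wrapping your data
    editable : true,        // validation runs when a cell edit is committed
    columns : [ {
        text : 'Custom Phase', datafield : 'customPhase', width : 150,
        validation : function (cell, value) {
            // Reject edits longer than 10 characters.
            if (value.toString().length > 10) {
                return { result : false, message : "Maximum 10 characters allowed" };
            }
            return true;
        }
    } ]
});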
