Getting the class label using DynamicTimeWarping with Accord.NET - machine-learning

I'm developing a project in which I need to do gesture recognition.
While searching for a way to do this I came across dynamic time warping.
To try this idea out, and since my project is in C#, I decided to use Accord.NET.
Before trying it in my project, I created a clean project and modified the example from Accord.NET's documentation, which can be found here:
http://accord-framework.net/docs/html/T_Accord_Statistics_Kernels_DynamicTimeWarping.htm
So, what I'm trying right now is to train my algorithm with a set of learning data (composed of the gestures swipe right, swipe left and double tap) and then use the same examples from the learning data to see if the algorithm identifies the correct gesture. The values are just an example, not the real deal.
The documentation is not very clear on how to do this, since the Decide method used in the example only returns a boolean.
I've searched the documentation for a way to identify the correct class, but to no avail.
The only thing I've found was the following line, but it returns an int value of 0 or 1 instead of a boolean:
var res1 = ((IClassifier)svm.ToMulticlass()).Decide(sequences[0]);
Can anyone point me in the right direction on how to correctly identify the gesture?
This is my first attempt at machine learning and Accord.NET, so this is all absolutely new to me.
The example code can be found below.
using Accord.MachineLearning;
using Accord.MachineLearning.VectorMachines.Learning;
using Accord.Math.Optimization.Losses;
using Accord.Statistics.Kernels;

namespace DynamicTimeWarpingExample
{
    public class Program
    {
        public static void Main(string[] args)
        {
            double[][][] sequences =
            {
                new double[][] // Swipe left
                {
                    new double[] { 1, 1, 1 },
                    new double[] { 1, 2, 1 },
                    new double[] { 1, 2, 2 },
                    new double[] { 2, 2, 2 },
                },
                new double[][] // Swipe right
                {
                    new double[] { 1, 10, 6 },
                    new double[] { 1, 5, 6 },
                    new double[] { 6, 7, 1 },
                },
                new double[][] // Double tap
                {
                    new double[] { 8, 2, 5 },
                    new double[] { 1, 50, 4 },
                }
            };

            int[] outputs =
            {
                0, // Swipe left
                1, // Swipe right
                2  // Double tap
            };

            var smo = new SequentialMinimalOptimization<DynamicTimeWarping, double[][]>()
            {
                Complexity = 1.5,
                Kernel = new DynamicTimeWarping(alpha: 1, degree: 1)
            };

            var svm = smo.Learn(sequences, outputs);

            bool[] predicted = svm.Decide(sequences);
            double error = new ZeroOneLoss(outputs).Loss(predicted); // error will be 0.0

            var res1 = ((IClassifier<double[][], int>)svm.ToMulticlass()).Decide(sequences[0]); // returns 0
            var res2 = ((IClassifier<double[][], int>)svm.ToMulticlass()).Decide(sequences[1]); // returns 1
            var res3 = ((IClassifier<double[][], int>)svm.ToMulticlass()).Decide(sequences[2]); // returns 1
        }
    }
}
***************** New Version *****************
public static void Main(string[] args)
{
    double[][][] sequences =
    {
        new double[][] // Swipe left
        {
            new double[] { 1, 1, 1 },
            new double[] { 1, 2, 1 },
            new double[] { 1, 2, 2 },
            new double[] { 2, 2, 2 },
        },
        new double[][] // Swipe right
        {
            new double[] { 1, 10, 6 },
            new double[] { 1, 5, 6 },
            new double[] { 6, 7, 1 },
        },
        new double[][] // Double tap
        {
            new double[] { 8, 2, 5 },
            new double[] { 1, 50, 4 },
        }
    };

    int[] outputs =
    {
        0, // Swipe left
        1, // Swipe right
        2  // Double tap
    };

    var teacher = new MulticlassSupportVectorLearning<DynamicTimeWarping, double[][]>()
    {
        // Configure the learning algorithm to use SMO to train the
        // underlying SVMs in each of the binary class subproblems.
        Learner = (param) => new SequentialMinimalOptimization<DynamicTimeWarping, double[][]>
        {
            Complexity = 1.5,
            Kernel = new DynamicTimeWarping(alpha: 1, degree: 1),
            //UseKernelEstimation = true
        }
    };

    // Learn a machine
    var machine = teacher.Learn(sequences, outputs);

    // Create the multi-class learning algorithm for the machine
    var calibration = new MulticlassSupportVectorLearning<DynamicTimeWarping, double[][]>()
    {
        Model = machine, // We will start with an existing machine

        // Configure the learning algorithm to use Platt's calibration
        Learner = (param) => new ProbabilisticOutputCalibration<DynamicTimeWarping, double[][]>()
        {
            Model = param.Model // Start with an existing machine
        }
    };

    // Configure parallel execution options
    calibration.ParallelOptions.MaxDegreeOfParallelism = 1;

    // Learn a machine
    calibration.Learn(sequences, outputs);

    // The multi-class decision is a class index, so the out parameters are ints
    int decision1, decision2, decision3, decision4, decision5, decision6;

    var res1 = machine.Probability(sequences[0], out decision1); // decision 0 - Probability 0.78698604216159851 - Score 1
    var res2 = machine.Probability(sequences[1], out decision2); // decision 1 - Probability 0.67246889837875257 - Score 1
    var res3 = machine.Probability(sequences[2], out decision3); // decision 2 - Probability 0.78698604216159851 - Score 1

    var newGesture1 = new double[][]
    {
        new double[] { 1, 1, 1 },
        new double[] { 1, 2, 1 },
        new double[] { 1, 2, 2 },
        new double[] { 2, 2, 2 },
    };
    var newGesture2 = new double[][]
    {
        new double[] { 1, 10, 6 },
        new double[] { 1, 5, 6 },
        new double[] { 6, 7, 1 },
    };
    var newGesture3 = new double[][]
    {
        new double[] { 8, 2, 5 },
        new double[] { 1, 50, 4 },
    };

    var res5 = machine.Score(newGesture1, out decision5); // decision 0 - Probability 0.35577588944247057 - Score 0.051251948605637254
    var res6 = machine.Score(newGesture2, out decision6); // decision 1 - Probability 0.40733908994050544 - Score 0.19912250476931792
    var res4 = machine.Score(newGesture3, out decision4); // decision 2 - Probability 0.71853321355842836 - Score 0.816934034911964
}

The problem is that you are creating a binary classifier for a problem that actually involves multiple classes.
In your case, instead of doing:
var smo = new SequentialMinimalOptimization<DynamicTimeWarping, double[][]>()
{
    Complexity = 1.5,
    Kernel = new DynamicTimeWarping(alpha: 1, degree: 1)
};

var svm = smo.Learn(sequences, outputs);
You would want to wrap this binary learning problem into a multi-class learning problem using:
// Create the multi-class learning algorithm for the machine
var teacher = new MulticlassSupportVectorLearning<DynamicTimeWarping, double[][]>()
{
    // Configure the learning algorithm to use SMO to train the
    // underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<DynamicTimeWarping, double[][]>
    {
        Complexity = 1.5,
        Kernel = new DynamicTimeWarping(alpha: 1, degree: 1)
    }
};

// Learn a machine
var svm = teacher.Learn(sequences, outputs);
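With the multi-class machine, Decide returns the class index directly, so you can map it back to a gesture name. A minimal sketch of that last step (the gestures lookup array is an illustration, not part of the original example):

// Decide on the multi-class machine returns the class index directly
int decision = svm.Decide(sequences[0]); // 0, 1 or 2

// hypothetical lookup table from class index to gesture name
string[] gestures = { "Swipe left", "Swipe right", "Double tap" };
Console.WriteLine(gestures[decision]); // "Swipe left"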

Related

tensorflow.js model does not learn

My model doesn't learn. It is supposed to do a softmax calculation at the end. I want a classification (quit or no-quit) as a result: the model should predict whether the customer will quit. I am giving the quit column as the label and have 196 input features.
My visor says there is no learning at all. But then I am not certain how the visor would get the information if my model were learning. I am very new to JavaScript and would appreciate any help.
ngOnInit() {
    this.train();
}

async train(): Promise<any> {
    const csvUrl = '/assets/little.csv';
    const csvDataset = tf.data.csv(
        csvUrl,
        {
            columnConfigs: {
                quit: {
                    isLabel: true
                }
            },
            delimiter: ','
        });

    const numOfFeatures = (await csvDataset.columnNames()).length - 1;
    console.log(numOfFeatures);

    const flattenedDataset =
        csvDataset
            .map(({xs, ys}: any) => {
                // Convert xs(features) and ys(labels) from object form (keyed by
                // column name) to array form.
                return {xs: Object.values(xs), ys: Object.values(ys)};
            }).batch(10);
    console.log(flattenedDataset.toArray());

    const model = tf.sequential({
        layers: [
            tf.layers.dense({inputShape: [196], units: 100, activation: 'relu'}),
            tf.layers.dense({units: 100, activation: 'relu'}),
            tf.layers.dense({units: 100, activation: 'relu'}),
            tf.layers.dense({units: 1, activation: 'softmax'}),
        ]
    });

    await trainModel(model, flattenedDataset);
    const surface = { name: 'Model Summary', tab: 'Model Inspection'};
    tfvis.show.modelSummary(surface, model);
    console.log('Done Training');
}

async function trainModel(model, flattenedDataset) {
    // Prepare the model for training.
    model.compile({
        optimizer: tf.train.adam(),
        loss: tf.losses.sigmoidCrossEntropy,
        metrics: ['accuracy']
    });

    const batchSize = 32;
    const epochs = 50;

    return await model.fitDataset(flattenedDataset, {
        batchSize,
        epochs,
        shuffle: true,
        callbacks: tfvis.show.fitCallbacks(
            { name: 'Training Performance' },
            ['loss'],
            { height: 200, callbacks: ['onEpochEnd'] }
        )
    });
}
The number of units in the last layer should be the number of categories. There are two categories, quit and no-quit. Additionally, your labels should be one-hot encoded. More general answers on why a model is not learning can be found here.
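A minimal sketch of what that change could look like (the exact handling of the quit column is an assumption, since the CSV schema isn't shown):

const model = tf.sequential({
    layers: [
        tf.layers.dense({inputShape: [196], units: 100, activation: 'relu'}),
        tf.layers.dense({units: 100, activation: 'relu'}),
        // two units: one per category (quit / no-quit)
        tf.layers.dense({units: 2, activation: 'softmax'}),
    ]
});

const flattenedDataset = csvDataset
    .map(({xs, ys}: any) => {
        // assumes the quit column holds 0 or 1; one-hot encode it
        const quit = Object.values(ys)[0];
        return {xs: Object.values(xs), ys: quit === 1 ? [0, 1] : [1, 0]};
    })
    .batch(10);

// with a softmax output, use categorical cross-entropy instead of
// sigmoidCrossEntropy
model.compile({
    optimizer: tf.train.adam(),
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy']
});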

ol3 ext-ol how can make cluster for different layers

I'm using ol3/ol4 with ol-ext.
I create two layers:
clusterSource = new ol.source.Cluster({
    distance: distanceFt,
    source: new ol.source.Vector()
});

// Animated cluster layer
clusterLayer = new ol.layer.AnimatedCluster({
    name: 'Cluster',
    source: clusterSource,
    animationDuration: 700, //$("#animatecluster").prop('checked') ? 700 : 0,
    // Cluster style
    style: getStyle
});
layersArray.push(clusterLayer); // adding to array

sourceReclamos_Eventos = new ol.source.Cluster({
    distance: distanceFt,
    source: new ol.source.Vector()
});

capaReclamos_Eventos = new ol.layer.AnimatedCluster({
    name: "Reclamos_Eventos",
    source: sourceReclamos_Eventos,
    animationDuration: 700,
    style: getStyle
});
layersArray.push(capaReclamos_Eventos);
Later, I add those layers to the select interaction:
selectCluster = new ol.interaction.SelectCluster({
    layers: arraySelectCLuster,
    // Point radius: to calculate distance between the features
    pointRadius: 20,
    animate: true, //$("#animatesel").prop('checked'),
    // Feature style when it springs apart
    featureStyle: featureStyle,
    selectCluster: false, // disable cluster selection
});
After loading the layers, only the features in the first layer persist; the features in the second layer are removed (cleared) when the zoom changes... why?
Please help.
EDIT
I'm adding features using clusterLayer.getSource().addFeatures() and capaReclamos_Eventos.getSource().addFeatures().
function addFeatures_Reclamos_Eventos(ffs, centrar) {
    var transform = ol.proj.getTransform('EPSG:4326', 'EPSG:3857');
    var features = [];
    for (var i = 0; i < ffs.length; i++) {
        features[i] = new ol.Feature();
        features[i].setProperties(ffs[i]);
        var geometry = new ol.geom.Point(transform([parseFloat(ffs[i].lon), parseFloat(ffs[i].lat)]));
        features[i].setGeometry(geometry);
    }
    qweFeature = features;
    capaReclamos_Eventos.getSource().addFeatures(features);
    removeloading('mapLoading');
    if (document.getElementById('botonFiltrar')) {
        document.getElementById('botonFiltrar').disabled = false;
    }
    if (centrar) {
        window.setTimeout(function () {
            var extent = capaReclamos_Eventos.getSource().getExtent();
            map.getView().fit(extent, map.getSize());
        }, 700); // 1/2 sec
    }
}

Drag and Drop limitation in Konva js

I recently began to learn Konva.js... please help me :)
<script>
var width = window.innerWidth;
var height = window.innerHeight;

function loadImages(sources, callback) {
    var assetDir = '/assets/';
    var images = {};
    var loadedImages = 0;
    var numImages = 0;
    for(var src in sources) {
        numImages++;
    }
    for(var src in sources) {
        images[src] = new Image();
        images[src].onload = function() {
            if(++loadedImages >= numImages) {
                callback(images);
            }
        };
        images[src].src = assetDir + sources[src];
    }
}

function isNearOutline(animal, outline) {
    var a = animal;
    var o = outline;
    var ax = a.getX();
    var ay = a.getY();
    if(ax > o.x - 20 && ax < o.x + 20 && ay > o.y - 20 && ay < o.y + 20) {
        return true;
    }
    else {
        return false;
    }
}

function drawBackground(background, beachImg, text) {
    var context = background.getContext();
    context.drawImage(beachImg, 0, 0);
    context.setAttr('font', '20pt Calibri');
    context.setAttr('textAlign', 'center');
    context.setAttr('fillStyle', 'white');
    context.fillText(text, background.getStage().getWidth() / 2, 40);
}

function initStage(images) {
    var stage = new Konva.Stage({
        container: 'container',
        width: 578,
        height: 530
    });
    var background = new Konva.Layer();
    var animalLayer = new Konva.Layer();
    var animalShapes = [];
    var score = 0;

    // image positions
    var animals = {
        snake: {
            x: 10,
            y: 70
        },
        giraffe: {
            x: 90,
            y: 70
        },
        monkey: {
            x: 275,
            y: 70
        },
        lion: {
            x: 400,
            y: 70
        }
    };

    var outlines = {
        snake_black: {
            x: 275,
            y: 350
        },
        giraffe_black: {
            x: 390,
            y: 250
        },
        monkey_black: {
            x: 300,
            y: 420
        },
        lion_black: {
            x: 100,
            y: 390
        }
    };

    // create draggable animals
    for(var key in animals) {
        // anonymous function to induce scope
        (function() {
            var privKey = key;
            var anim = animals[key];
            var animal = new Konva.Image({
                image: images[key],
                x: anim.x,
                y: anim.y,
                draggable: true
            });

            animal.on('dragstart', function() {
                this.moveToTop();
                animalLayer.draw();
            });

            /*
             * check if animal is in the right spot and
             * snap into place if it is
             */
            animal.on('dragend', function() {
                var outline = outlines[privKey + '_black'];
                if(!animal.inRightPlace && isNearOutline(animal, outline)) {
                    animal.position({
                        x: outline.x,
                        y: outline.y
                    });
                    animalLayer.draw();
                    animal.inRightPlace = true;
                    if(++score >= 4) {
                        var text = 'You win! Enjoy your booty!';
                        drawBackground(background, images.beach, text);
                    }
                    // disable drag and drop
                    setTimeout(function() {
                        animal.draggable(false);
                    }, 50);
                }
            });

            // make animal glow on mouseover
            animal.on('mouseover', function() {
                animal.image(images[privKey + '_glow']);
                animalLayer.draw();
                document.body.style.cursor = 'pointer';
            });

            // return animal on mouseout
            animal.on('mouseout', function() {
                animal.image(images[privKey]);
                animalLayer.draw();
                document.body.style.cursor = 'default';
            });

            animal.on('dragmove', function() {
                document.body.style.cursor = 'pointer';
            });

            animalLayer.add(animal);
            animalShapes.push(animal);
        })();
    }

    // create animal outlines
    for(var key in outlines) {
        // anonymous function to induce scope
        (function() {
            var imageObj = images[key];
            var out = outlines[key];
            var outline = new Konva.Image({
                image: imageObj,
                x: out.x,
                y: out.y
            });
            animalLayer.add(outline);
        })();
    }

    stage.add(background);
    stage.add(animalLayer);

    drawBackground(background, images.beach, 'Ahoy! Put the animals on the beach!');
}

var sources = {
    beach: 'beach.png',
    snake: 'snake.png',
    snake_glow: 'snake-glow.png',
    snake_black: 'snake-black.png',
    lion: 'lion.png',
    lion_glow: 'lion-glow.png',
    lion_black: 'lion-black.png',
    monkey: 'monkey.png',
    monkey_glow: 'monkey-glow.png',
    monkey_black: 'monkey-black.png',
    giraffe: 'giraffe.png',
    giraffe_glow: 'giraffe-glow.png',
    giraffe_black: 'giraffe-black.png'
};

loadImages(sources, initStage);
</script>
As we can see in this example (Animals_on_the_Beach_Game), the animal images are draggable and can be dropped anywhere... but I want to change it so that they can only be dropped on a specific place. What can I do?
Thank you :)
This is more of a design question, as letting go of the mouse button isn't something you can prevent. It would also be non-intuitive to keep the image attached to the mouse position as you would then need a new mouse event to associate with dropping it. What I've done for a drag and drop UI was to either (1) destroy the dropped shape, or if that wasn't an option, (2) animate the shape back (i.e. snap back) to its original position. Alternatively, you might (3) find the closest likely valid drop target and snap to that location.
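For option (2), a minimal sketch of how the snap-back could be animated with Konva's built-in tweening (the shape, origin and isValidDrop names are hypothetical, standing in for your own shape, its stored start position and your drop-target test):

shape.on('dragend', function() {
    if (!isValidDrop(shape)) { // isValidDrop: your own drop-target check
        // tween the shape back to its stored origin instead of jumping
        new Konva.Tween({
            node: shape,
            duration: 0.3,
            x: origin.x,
            y: origin.y,
            easing: Konva.Easings.EaseInOut
        }).play();
    }
});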
First you define lionOrigin, which maybe you already have.
You have to implement the check in the dragend event of the dragged object, let's say the lion. You have to check the position of the lion in relation to the desired end position, let's call it lionDestiny. That can be done with simple geometry: calculating the distance between two points. We do that with the distanceA2B() function.
Now you can establish an offset within which you can snap the object, as it is close enough. If the object is not within the offset, you place the lion back at lionOrigin.
Finally, in Konva.js you can use .x() and .y() to easily get or set the lion's position.
Something like this:
var lionOrigin = [50, 50];
var lionDestiny = [200, 200];
var offset = 20;

function distanceA2B(a, b) {
    return Math.sqrt(((a[0] - b[0]) * (a[0] - b[0])) + ((a[1] - b[1]) * (a[1] - b[1])));
}

lion.on('dragend', (e) => {
    var d = distanceA2B([lion.x(), lion.y()], lionDestiny);
    if (d < offset) {
        lion.x(lionDestiny[0]);
        lion.y(lionDestiny[1]);
    } else {
        lion.x(lionOrigin[0]);
        lion.y(lionOrigin[1]);
    }
});
Hope this helps!
It would have been better if you could explain your question more, since you say you want to move a shape to a specific position. Konva.js provides various events through which you can do this. For example, suppose you want to interchange the locations of two shapes when you drag the first shape onto the second and drop it there. In this case, you can use Konva's dragend event: when you drop the target element on another element, check whether they intersect each other, and if so, interchange their coordinates.
Here is the function to find the intersection between two elements:
function haveIntersection(r1, r2) {
    return !(
        r2.x > r1.x + r1.width ||
        r2.x + r2.width < r1.x ||
        r2.y > r1.y + r1.height ||
        r2.y + r2.height < r1.y
    );
}
And from here, you can try to understand the functionality. Though it's in Nuxt.js, the events and scripts would be almost the same if you are using plain JavaScript. You can find sample code with an explanation for swapping the locations of two shapes. So even if you don't want to swap locations but want to move your target element to a given position, this will show you how to do it.
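A minimal sketch of how that dragend check could look (shapeA, shapeB and layer are hypothetical shapes and a layer already on the stage):

var startPos;
shapeA.on('dragstart', function() {
    // remember where the drag started so we can swap correctly
    startPos = shapeA.position();
});

shapeA.on('dragend', function() {
    // getClientRect() returns {x, y, width, height} for a shape
    if (haveIntersection(shapeA.getClientRect(), shapeB.getClientRect())) {
        shapeA.position(shapeB.position()); // move A onto B's spot
        shapeB.position(startPos);          // move B to where A started
        layer.draw();
    }
});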

How to animate a line string between 2 points in OpenLayers 3 map?

I want to draw a line between multiple points from an array of coordinates.
My code looks like :
<button onclick="drawAnimatedLine(new ol.geom.Point(6210355.674114,2592743.9994331785), new ol.geom.Point(8176927.537835015,2255198.08252584), 50, 2000);">Draw Line</button>
And my js looks like :
function drawAnimatedLine(startPt, endPt, steps, time, fn) {
    // ol.geom.Point has no .x/.y properties; read the coordinates instead
    var start = startPt.getCoordinates();
    var end = endPt.getCoordinates();
    // the original OpenLayers-2-style literal (strokeColor/strokeWidth) has
    // no effect in ol3; use ol.style.Style instead
    var style = new ol.style.Style({
        stroke: new ol.style.Stroke({
            color: 'rgba(0, 0, 255, 0.5)', // strokeColor '#0000ff', strokeOpacity 0.5
            width: 15
        })
    });
    var directionX = (end[0] - start[0]) / steps;
    var directionY = (end[1] - start[1]) / steps;
    var i = 0;
    var prevLayer;
    var lineDraw = setInterval(function () {
        console.log("Inside Animate Line");
        if (i > steps) {
            clearInterval(lineDraw);
            if (fn)
                fn();
            return;
        }
        var newEndPt = [start[0] + i * directionX, start[1] + i * directionY];
        var line = new ol.geom.LineString([start, newEndPt]);
        var fea = new ol.Feature({ geometry: line });
        // features belong to a vector source, not directly to the layer
        var vec = new ol.layer.Vector({
            source: new ol.source.Vector({ features: [fea] }),
            style: style
        });
        map.addLayer(vec);
        if (prevLayer) {
            map.removeLayer(prevLayer);
        }
        prevLayer = vec;
        i++;
    }, time / steps);
}
Note: the coordinates will be dynamic, but for testing I've passed sample data in the button's onclick. Please do try to sort this issue out as soon as possible.
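A lighter-weight variation on the same idea, sketched under the assumption that a map object already exists: keep a single feature in one vector source and update its geometry on each tick, instead of adding and removing a whole layer every interval.

// Sketch: animate by mutating one LineString in place; start and end are
// plain [x, y] coordinate arrays
function animateLine(start, end, steps, time) {
    var line = new ol.geom.LineString([start, start]);
    var source = new ol.source.Vector({
        features: [new ol.Feature(line)]
    });
    map.addLayer(new ol.layer.Vector({ source: source }));

    var i = 0;
    var timer = setInterval(function () {
        if (i > steps) {
            clearInterval(timer);
            return;
        }
        // extend the line toward the end point; the source redraws itself
        line.setCoordinates([start, [
            start[0] + (end[0] - start[0]) * i / steps,
            start[1] + (end[1] - start[1]) * i / steps
        ]]);
        i++;
    }, time / steps);
}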

JavaCV FaceRecognizer predict not working

I'm using OpenCV 2.4.6 and JavaCV 0.6. I'm trying to make a face recognizer.
This is my code:
FaceRecognizer ef = createEigenFaceRecognizer(1, 0.00000001);
int facewidth = 92, faceheight = 112;
private boolean stopRec = false;
List<String> names = new ArrayList<String>();

public void recognize(IplImage face) {
    int predicted;
    int[] tabPredicted = new int[2];
    double[] predConfTab = new double[2];

    IplImage resizedFace = IplImage.create(new CvSize(facewidth, faceheight), IPL_DEPTH_8U, 1);
    cvResize(face, resizedFace);

    if (names.size() != 0) {
        ef.predict(resizedFace, tabPredicted, predConfTab);
        predicted = tabPredicted[0];
    } else {
        predicted = -1;
    }

    if (predicted == -1) {
        // adding user like:
        int i = names.size();
        names.add(name);
        System.out.println("Identified new person: " + names.get(i));
        MatVector mv = new MatVector(1);
        mv.put(0, resizedFace);
        int[] u = new int[] { i };
        ef.train(mv, u);
    }
}
I tried a lot of configurations. I'm sure that I have a valid face image in grayscale. The problem is that after ef.predict(resizedFace, tabPredicted, predConfTab);
tabPredicted[0] is always the index of the last added user and predConfTab[0] always equals 0, which would mean every face exactly matches the last one added.
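One thing worth checking: OpenCV's FaceRecognizer.train() replaces the learned model rather than appending to it, so calling ef.train(mv, u) with only the newest face leaves a model that knows just that one person. A sketch of retraining with every face collected so far (the faces list is a hypothetical accumulator, not in the original code):

List<IplImage> faces = new ArrayList<IplImage>();

void addAndRetrain(IplImage resizedFace, String name) {
    faces.add(resizedFace);
    names.add(name);
    // train() clears the existing model, so pass all faces collected so far
    MatVector mv = new MatVector(faces.size());
    int[] labels = new int[faces.size()];
    for (int j = 0; j < faces.size(); j++) {
        mv.put(j, faces.get(j));
        labels[j] = j; // label j matches names.get(j)
    }
    ef.train(mv, labels);
}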
