A differentiable approach to counting elements in PyTorch

I need to count the number of times a certain element appears in a tensor in a differentiable way.
I have a tensor
a = torch.arange(10, dtype = float, requires_grad=True)
print(a)
>>>tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], dtype=torch.float64,
requires_grad=True)
Say I'm trying to count the number of times the element 5.0 appears. I found this SO question that is exactly the same, but the accepted answer is not differentiable:
(a == 5).sum()
>>>tensor(1)
(a == 5).sum().requires_grad
>>>False
My goal is to have a loss that enforces the element to appear N times:
loss = N - (a == 5).sum()

What you probably care about is differentiability with respect to the parameters, so your vector [1,2,3,4,5] is actually an output of f(x | theta). Since you cast everything to integers, this will never create a meaningful gradient for theta. You have two paths:
Change your output so that you do not output numbers but rather distributions over number sequences: instead of a vector of integers, output a matrix of probabilities P of shape N x K, where K is the maximum number and N the number of integers, and an entry p_nk is the probability of the nth number being equal to k. Then you can write a nice smooth loss that compares the expected count of each value, SUM_n P_nk, with a target count Z_k (Z is of length K):
loss(P, Z) := SUM_k || Z_k - SUM_n P_nk ||
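A minimal PyTorch sketch of that first path (the sizes, the softmax parametrisation and the target count of 3 are illustrative assumptions, not taken from the question):
import torch
N, K = 10, 16
logits = torch.randn(N, K, requires_grad=True)    # stand-in for the model output f(x | theta)
P = torch.softmax(logits, dim=1)                  # P[n, k] = probability that the nth number equals k
Z = torch.zeros(K)
Z[5] = 3.0                                        # e.g. enforce that the value 5 appears 3 times
expected_counts = P.sum(dim=0)                    # differentiable "soft count" of each value k
loss = torch.sum(torch.abs(expected_counts - Z))  # the loss above, with | . | as the norm
loss.backward()                                   # gradients flow back into logits / theta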
Treat the whole setup as an RL problem; then you do not need a "differentiable" loss. Just use the difference between expected and actual occurrences as a negative reward.
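And a rough sketch of the RL path with a score-function (REINFORCE) estimator; the logits and the target count of 3 are again made-up stand-ins:
import torch
logits = torch.randn(10, 16, requires_grad=True)
dist = torch.distributions.Categorical(logits=logits)
sample = dist.sample()                              # hard integer outputs, non-differentiable
reward = -(sample.eq(5).sum().float() - 3.0).abs()  # negative |actual count - target count|
loss = -(reward * dist.log_prob(sample).sum())      # score-function surrogate loss
loss.backward()                                     # gradient w.r.t. logits via the log-probabilities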

Related

Ambiguity in recurrent neural network training in Julia Flux

I'm using Julia's Flux library to learn about neural networks. According to the documentation for train! (where train! takes arguments (loss, params, data, opt)):
For each datapoint d in data, compute the gradient of loss with respect to params through backpropagation and call the optimizer opt.
(see source for train!: https://github.com/FluxML/Flux.jl/blob/master/src/optimise/train.jl)
For a conventional NN based on Dense -- let's say with a one-dimensional input and output, i.e. with one feature -- this is easy to understand. Each element in data is a pair of single numbers, an independent sample of 1-d input/output values. train! does forward- and backpropagation on each pair of 1-d samples one at a time. In the process, the loss function is evaluated on each sample. (Do I have this right?)
My question is: how does this extend to a recurrent NN? Take the case of an RNN with 1-d (i.e. one feature) input and output. It seems like there's some ambiguity in how to structure the input and output data, and the results change based on the structure. As one example:
x = [[1], [2], [3]]
y = [4, 5, 6]
data = zip(x, y)
m = RNN(1, 1)
opt = Descent()
loss(x, y) = sum((Flux.stack(m.(x), 1) .- y) .^ 2)
train!(loss, params(m), data, opt)
(loss function taken from: https://github.com/FluxML/Flux.jl/blob/master/docs/src/models/recurrence.md)
In this example, when train! loops through each sample (for d in data), each value of d is a pair of single values from x and y, e.g. ([1], 4). loss is evaluated based on these single values. This is the same as in the Dense case.
On the other hand, consider:
x = [[[1], [2], [3]]]
y = [[4, 5, 6]]
m = RNN(1, 1)
opt = Descent()
loss(x, y) = sum((Flux.stack(m.(x), 1) .- y) .^ 2)
train!(loss, params(m), zip(x, y), opt)
Note that the only difference here is that x and y are nested in an extra pair of square brackets. As a result there's only one d in data, and it's a pair of sequences: ([[1], [2], [3]], [4, 5, 6]). loss can be evaluated on this version of d, and it returns a 1-d value, as required for training. But the value returned by loss is different than in any of the three results from the previous case, so the training process turns out differently.
The point is that both structures are valid in the sense that loss and train! handle them without error. Conceptually, I can make an argument for both structures being correct. But the results are different, and I assume that only one way is right. In other words, for training an RNN, should each d in data be a whole sequence, or a single element from a sequence?

Simple RNN example showing numerics

I'm trying to understand RNNs and I would like to find a simple example that actually shows the one-hot vectors and the numerical operations. Preferably conceptual, since actual code may make it even more confusing. Most examples I google just show boxes with loops coming out of them and it's really difficult to understand what exactly is going on. In the rare case where they do show the vectors it's still difficult to see how they are getting the values.
For example, I don't know where the values are coming from in this picture: https://i1.wp.com/karpathy.github.io/assets/rnn/charseq.jpeg
If the example could integrate LSTMs and other popular extensions that would be cool too.
In the simple RNN case, a network accepts an input sequence x and produces an output sequence y, while a hidden sequence h stores the network's dynamic state, such that at timestep i: x(i) ∈ ℝ^M, h(i) ∈ ℝ^N, y(i) ∈ ℝ^P, i.e. real-valued vectors of M/N/P dimensions corresponding to input, hidden and output values respectively. The RNN changes its state and emits output based on the state equations:
h(t) = tanh(Wxh ∗ [x(t); h(t-1)]), where Wxh is a linear map ℝ^(M+N) ↦ ℝ^N, * is matrix multiplication and ; is the concatenation operation. Concretely, to obtain h(t) you concatenate x(t) with h(t-1), you multiply the concatenated vector (of shape M+N) with Wxh (of shape (M+N, N)), and you apply a tanh non-linearity to each element of the resulting vector (of shape N).
y(t) = sigmoid(Why * h(t)), where Why is a linear map ℝ^N ↦ ℝ^P. Concretely, you multiply h(t) (of shape N) with Why (of shape (N, P)) to obtain a P-dimensional output vector, to which the sigmoid function is applied.
In other words, obtaining the output at time t requires iterating through the above equations for i=0,1,...,t. Therefore, the hidden state acts as a finite memory for the system, allowing for context-dependent computation (i.e. h(t) fully depends on both the history of the computation and the current input, and so does y(t)).
In the case of gated RNNs (GRU or LSTM), the state equations get somewhat harder to follow, due to the gating mechanisms which essentially allow selection between the input and the memory, but the core concept remains the same.
Numeric Example
Let's follow your example; we have M = 4, N = 3, P = 4, so Wxh is of shape (7, 3) and Why of shape (3, 4). We of course do not know the values of either W matrix, so we cannot reproduce the same results; we can still follow the process though.
At timestep t<0, we have h(t) = [0, 0, 0].
At timestep t=0, we receive input x(0) = [1, 0, 0, 0]. Concatenating x(0) with h(t<0), we get [x(t); h(t-1)] = [1, 0, 0, 0, 0, 0, 0] (let's call this vector u to ease notation). We apply u * Wxh (i.e. multiplying a 7-dimensional vector with a 7 by 3 matrix) and get a vector v = [v1, v2, v3], where vi = Σj uj Wji = u1 W1i + u2 W2i + ... + u7 W7i. Finally, we apply tanh on v, obtaining h(0) = [tanh(v1), tanh(v2), tanh(v3)] = [0.3, -0.1, 0.9]. From h(0) we can also get y(0) via the same process: multiply h(0) with Why (i.e. a 3-dimensional vector with a 3 by 4 matrix), get a vector s = [s1, s2, s3, s4], apply sigmoid on s and get σ(s) = y(0).
At timestep t=1, we receive input x(1) = [0, 1, 0, 0]. We concatenate x(1) with h(0) to get a new u = [0, 1, 0, 0, 0.3, -0.1, 0.9]. u is again multiplied with Wxh, and tanh is again applied on the result, giving us h(1) = [1, 0.3, 1]. Similarly, h(1) is multiplied by Why, giving us a new s vector on which we apply the sigmoid to obtain σ(s) = y(1).
This process continues until the input sequence finishes, ending the computation.
Note: I have ignored bias terms in the above equations because they do not affect the core concept and they would make the notation harder to follow.
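A toy NumPy forward pass matching the shapes above (M=4, N=3, P=4); the weights are random stand-ins, so the numbers will differ from the picture, but the operations are exactly the ones described:
import numpy as np
M, N, P = 4, 3, 4
rng = np.random.RandomState(0)
Wxh = rng.randn(M + N, N)           # maps [x(t); h(t-1)] -> h(t)
Why = rng.randn(N, P)               # maps h(t) -> y(t)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
h = np.zeros(N)                     # h(t<0) = [0, 0, 0]
for x in np.eye(M):                 # one-hot inputs x(0), x(1), x(2), x(3)
    u = np.concatenate([x, h])      # u = [x(t); h(t-1)], shape (M+N,)
    h = np.tanh(u.dot(Wxh))         # h(t), shape (N,)
    y = sigmoid(h.dot(Why))         # y(t), shape (P,)
    print(h, y)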

Create a List and Use it in Loss Function Tensorflow

I am trying to create a list based on my neural network outputs and use it in Tensorflow as a loss function.
Assume that results is a list of size [1, batch_size] that is output by a neural network. I check to see whether the first value of this list is in a specific range passed in as a placeholder called valid_range, and if it is, I add 1 to a list; if it is not, I add -1. The goal is to make all predictions of the network fall in the range, so the correct predictions form a tensor of all 1s, which I call correct_predictions.
values_list = []
for j in range(batch_size):
    a = results[0, j] >= valid_range[0]
    b = results[0, j] <= valid_range[1]
    c = tf.logical_and(a, b)
    if (c == 1):
        values_list.append(1)
    else:
        values_list.append(-1.)
values_list_tensor = tf.convert_to_tensor(values_list)
correct_predictions = tf.ones([batch_size, ], tf.float32)
Now, I want to use this as a loss function in my network, so that I can force all the predictions to be in the specified range. I try to train like this:
loss = tf.reduce_mean(tf.squared_difference(values_list_tensor, correct_predictions))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
gradients, variables = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, gradient_clip_threshold)
optimize = optimizer.apply_gradients(zip(gradients, variables))
This, however, has a problem and throws an error on the last optimize line, saying:
ValueError: No gradients provided for any variable: ['<tensorflow.python.training.optimizer._RefVariableProcessor object at 0x7f0245d4afd0>',
'<tensorflow.python.training.optimizer._RefVariableProcessor object at 0x7f0245d66050>'
...
I tried to debug this in Tensorboard, and I notice that the list I am creating does not appear in the graph, so basically the x part of the loss function is not part of the network itself. Is there some way to accurately create a list based on the predictions of a neural network and use it in the loss function in Tensorflow to train the network?
Please help, I have been stuck on this for a few days now.
Edit:
Following what was suggested in the comments, I decided to use an l2 loss function, multiplying it by the binary vector I had from before, values_list_tensor. The binary vector now has values 1 and 0 instead of 1 and -1. This way, when the prediction is in the range the loss is 0, and otherwise it is the normal l2 loss. As I am unable to see the values of the tensors, I am not sure if this is correct. However, I can view the final loss and it is always 0, so something is wrong here. I am unsure whether the multiplication is being done correctly and whether values_list_tensor is calculated accurately. Can someone help and tell me what could be wrong?
loss = tf.reduce_mean(tf.nn.l2_loss(tf.matmul(tf.transpose(tf.expand_dims(values_list_tensor, 1)), tf.expand_dims(result[0, :], 1))))
Thanks
To answer the question in the comment: one way to write a piecewise function is using tf.cond. For example, here is a function that returns 0 on [-1, 1] and x everywhere else:
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32)
y = tf.cond(tf.logical_or(tf.greater(x, 1.0), tf.less(x, -1.0)), lambda : x, lambda : 0.0)
y.eval({x: 1.5}) # prints 1.5
y.eval({x: 0.5}) # prints 0.0
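Back to the original problem: the hard in-range check cuts the graph, so no gradient can reach the network weights. A differentiable alternative is a hinge-style penalty that is zero inside the range and grows outside it; this is only a sketch, with result and valid_range as stand-in placeholders:
import tensorflow as tf
result = tf.placeholder(tf.float32, shape=[1, None])  # stand-in for the network output
valid_range = tf.placeholder(tf.float32, shape=[2])   # [low, high]
below = tf.nn.relu(valid_range[0] - result)           # how far each prediction is below low
above = tf.nn.relu(result - valid_range[1])           # how far each prediction is above high
loss = tf.reduce_mean(tf.square(below + above))       # zero exactly when all predictions are in range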

calculate the spatial dimension of a graph

Given a graph (say fully-connected), and a list of distances between all the points, is there an available way to calculate the number of dimensions required to instantiate the graph?
E.g. by construction, say we have graph G with points A, B, C and distances AB=BC=CA=1. Starting from A (0 dimensions) we add B at distance 1 (1 dimension), now we find that a 2nd dimension is needed to add C and satisfy the constraints. Does code exist to do this and spit out (in this case) dim(G) = 2?
E.g. if the points are photos, and the distances between them are calculated by the Gist algorithm (http://people.csail.mit.edu/torralba/code/spatialenvelope/), I would expect the derived dimension to match the number of image parameters considered by Gist.
Added: here is a 5-d python demo based on the suggestion - seemingly perfect!
'similarities' is the distance matrix.
import numpy as np
from sklearn import manifold
similarities = [[0., 1., 1., 1., 1., 1.],
                [1., 0., 1., 1., 1., 1.],
                [1., 1., 0., 1., 1., 1.],
                [1., 1., 1., 0., 1., 1.],
                [1., 1., 1., 1., 0., 1.],
                [1., 1., 1., 1., 1., 0.]]
seed = np.random.RandomState(seed=3)
for i in [1, 2, 3, 4, 5]:
    mds = manifold.MDS(n_components=i, max_iter=3000, eps=1e-9, random_state=seed,
                       dissimilarity="precomputed", n_jobs=1)
    print("%d %f" % (i, mds.fit(similarities).stress_))
Output:
1 3.333333
2 1.071797
3 0.343146
4 0.151531
5 0.000000
I find that when I apply this method to a subset of my data (distances between 329 pictures with '11' in the file name, using two different metrics), the stress doesn't decrease to 0 as I'd expect from the above; it levels off after about 5 dimensions. (On the SURF results I tried doubling max_iter and varying eps by an order of magnitude each way without changing results in the first four digits.)
It turns out the distances do not satisfy the triangle inequality in ~0.02% of the triangles, with the average violation roughly equal to 8% of the average distance, for one metric examined.
Overall I prefer the fractal dimension of the sorted distances, since it doesn't require picking a cutoff. I'm marking the MDS response as an answer because it works for the consistent case. My results for the fractal dimension and the MDS case are below.
Another descriptive statistic turns out to be the triangle violations. Results for this are further below. If anyone could generalize to higher dimensions, that would be very interesting (results and learning python :-).
MDS results, ignoring the triangle inequality issue:
N_dim   stress_ (SURF_match)   stress_ (GIST_match)
1 83859853704.027344 913512153794.477295
2 24402474549.902721 238300303503.782837
3 14335187473.611954 107098797170.304825
4 10714833228.199451 67612051749.697998
5 9451321873.828577 49802989323.714806
6 8984077614.154467 40987031663.725784
7 8748071137.806602 35715876839.391762
8 8623980894.453981 32780605791.135693
9 8580736361.368249 31323719065.684353
10 8558536956.142039 30372127335.209297
100 8544120093.395177 28786825401.178596
1000 8544192695.435946 28786840008.666389
Forging ahead with that to devise a metric to compare the dimensionality of the two results, an ad hoc choice is to set the criterion to
1.1 * stress_at_dim=100
resulting in the proposition that the SURF_match has a quasi-dimension in 5..6, while GIST_match has a quasi-dimension in 8..9. I'm curious if anyone thinks that means anything :-). Another question is whether there is any meaningful interpretation for the relative magnitudes of stress at any dimension for the two metrics. Here are some results to put it in perspective. Frac_d is the fractal dimension of the sorted distances, calculated according to Higuchi's method using code from IQM, Dim is the dimension as described above.
Method Frac_d Dim stress(100) stress(1)
Lab_CIE94 1.1458 3 2114107376961504.750000 33238672000252052.000000
Greyscale 1.0490 8 42238951082.465477 1454262245593.781250
HS_12x12 1.0889 19 33661589105.972816 3616806311396.510254
HS_24x24 1.1298 35 16070009781.315575 4349496176228.410645
HS_48x48 1.1854 64 7231079366.861403 4836919775090.241211
GIST 1.2312 9 28786830336.332951 997666139720.167114
HOG_250_words 1.3114 10 10120761644.659481 150327274044.045624
HOG_500_words 1.3543 13 4740814068.779779 70999988871.696045
HOG_1k_words 1.3805 15 2364984044.641845 38619752999.224922
SIFT_1k_words 1.5706 11 1930289338.112194 18095265606.237080
SURFFAST_200w 1.3829 8 2778256463.307569 40011821579.313110
SRFFAST_250_w 1.3754 8 2591204993.421285 35829689692.319153
SRFFAST_500_w 1.4551 10 1620830296.777577 21609765416.960484
SURFFAST_1k_w 1.5023 14 949543059.290031 13039001089.887533
SURFFAST_4k_w 1.5690 19 582893432.960562 5016304129.389058
Looking at the Pearson correlation between columns of the table:
Pearson correlation 2-tailed p-value
FracDim, Dim: (-0.23333296587402277, 0.40262625206429864)
Dim, Stress(100): (-0.24513480360257348, 0.37854224076180676)
Dim, Stress(1): (-0.24497740363489209, 0.37885820835053186)
Stress(100),S(1): ( 0.99999998200931084, 8.9357374620135412e-50)
FracDim, S(100): (-0.27516440489210137, 0.32091019789264791)
FracDim, S(1): (-0.27528621200454373, 0.32068731053608879)
I naively wonder how all correlations but one can be negative, and what conclusions can be drawn. Using this code:
import sys
import numpy as np
from scipy.stats.stats import pearsonr
file = sys.argv[1]
col1 = int(sys.argv[2])
col2 = int(sys.argv[3])
arr1 = []
arr2 = []
with open(file, "r") as ins:
    for line in ins:
        words = line.split()
        arr1.append(float(words[col1]))
        arr2.append(float(words[col2]))
narr1 = np.array(arr1)
narr2 = np.array(arr2)
# normalize
narr1 -= narr1.mean(0)
narr2 -= narr2.mean(0)
# standardize
narr1 /= narr1.std(0)
narr2 /= narr2.std(0)
print pearsonr(narr1, narr2)
On to the number of violations of the triangle inequality by the various metrics, all for the 329 pics with '11' in their sequence:
(1) n_violations/triangles
(2) avg violation
(3) avg distance
(4) avg violation / avg distance
n_vio (1) (2) (3) (4)
lab 186402 0.031986 157120.407286 795782.437570 0.197441
grey 126902 0.021776 1323.551315 5036.899585 0.262771
600px 120566 0.020689 1339.299040 5106.055953 0.262296
Gist 69269 0.011886 1252.289855 4240.768117 0.295298
RGB
12^3 25323 0.004345 791.203886 7305.977862 0.108295
24^3 7398 0.001269 525.981752 8538.276549 0.061603
32^3 5404 0.000927 446.044597 8827.910112 0.050527
48^3 5026 0.000862 640.310784 9095.378790 0.070400
64^3 3994 0.000685 614.752879 9270.282684 0.066314
98^3 3451 0.000592 576.815995 9409.094095 0.061304
128^3 1923 0.000330 531.054082 9549.109033 0.055613
RGB/600px
12^3 25190 0.004323 790.258158 7313.379003 0.108057
24^3 7531 0.001292 526.027221 8560.853557 0.061446
32^3 5463 0.000937 449.759107 8847.079639 0.050837
48^3 5327 0.000914 645.766473 9106.240103 0.070915
64^3 4382 0.000752 634.000685 9272.151040 0.068377
128^3 2156 0.000370 544.644712 9515.696642 0.057236
HueSat
12x12 7882 0.001353 950.321873 7555.464323 0.125779
24x24 1740 0.000299 900.577586 8227.559169 0.109459
48x48 1137 0.000195 661.389622 8653.085004 0.076434
64x64 1134 0.000195 697.298942 8776.086144 0.079454
HueSat/600px
12x12 6898 0.001184 943.319078 7564.309456 0.124707
24x24 1790 0.000307 908.031844 8237.927256 0.110226
48x48 1267 0.000217 693.607735 8647.060308 0.080213
64x64 1289 0.000221 682.567106 8761.325172 0.077907
hog
250 53782 0.009229 675.056004 1968.357004 0.342954
500 18680 0.003205 559.354979 1431.803914 0.390665
1k 9330 0.001601 771.307074 970.307130 0.794910
4k 5587 0.000959 993.062824 650.037429 1.527701
sift
500 26466 0.004542 1267.833182 1073.692611 1.180816
1k 16489 0.002829 1598.830736 824.586293 1.938949
4k 10528 0.001807 1918.068294 533.492373 3.595306
surffast
250 38162 0.006549 630.098999 1006.401837 0.626091
500 19853 0.003407 901.724525 830.596690 1.085635
1k 10659 0.001829 1310.348063 648.191424 2.021545
4k 8988 0.001542 1488.200156 419.794008 3.545072
Anyone capable of generalizing to higher dimensions? Here is my first-timer code:
import sys
import time
import math
import numpy as np
import sortedcontainers
from sortedcontainers import SortedSet
from sklearn import manifold
seed = np.random.RandomState(seed=3)
pairs = sys.argv[1]
ss = SortedSet()
print time.strftime("%H:%M:%S"), "counting/indexing"
sys.stdout.flush()
with open(pairs, "r") as ins:
    for line in ins:
        words = line.split()
        ss.add(words[0])
        ss.add(words[1])
N = len(ss)
print time.strftime("%H:%M:%S"), "size ", N
sys.stdout.flush()
sim = np.diag(np.zeros(N))
dtot = 0.0
with open(pairs, "r") as ins:
    for line in ins:
        words = line.split()
        i = ss.index(words[0])
        j = ss.index(words[1])
        #val = math.log(float(words[2]))
        #val = math.sqrt(float(words[2]))
        val = float(words[2])
        sim[i][j] = val
        sim[j][i] = val
        dtot += val
avgd = dtot / (N * (N-1))
ntri = 0
nvio = 0
vio = 0.0
for i in xrange(1, N):
    for j in xrange(i+1, N):
        d1 = sim[i][j]
        for k in xrange(j+1, N):
            ntri += 1
            d2 = sim[i][k]
            d3 = sim[j][k]
            dd = d1 + d2
            diff = d3 - dd
            if (diff > 0.0):
                nvio += 1
                vio += diff
avgvio = 0.0
if (nvio > 0):
    avgvio = vio / nvio
print("tot: %d %f %f %f %f" % (nvio, (float(nvio)/ntri), avgvio, avgd, (avgvio/avgd)))
Here is how I tried sklearn's Isomap:
for i in [1, 2, 3, 4, 5]:
    # nbrs < points
    iso = manifold.Isomap(n_neighbors=nbrs, n_components=i,
                          eigen_solver="auto", tol=1e-9, max_iter=3000,
                          path_method="auto", neighbors_algorithm="auto")
    dis = euclidean_distances(iso.fit(sim).embedding_)
    stress = ((dis.ravel() - sim.ravel()) ** 2).sum() / 2
Given a graph (say fully-connected), and a list of distances between all the points, is there an available way to calculate the number of dimensions required to instantiate the graph?
Yes. The more general topic this problem would be part of, in terms of graph theory, is called "Graph Embedding".
E.g. by construction, say we have graph G with points A, B, C and distances AB=BC=CA=1. Starting from A (0 dimensions) we add B at distance 1 (1 dimension), now we find that a 2nd dimension is needed to add C and satisfy the constraints. Does code exist to do this and spit out (in this case) dim(G) = 2?
This is almost exactly the way that Multidimensional Scaling works.
Multidimensional scaling (MDS) would not exactly answer the question of "How many dimensions would I need to represent this point cloud / graph?" with a number but it returns enough information to approximate it.
Multidimensional Scaling methods will attempt to find a "good mapping" to reduce the number of dimensions, say from 120 (in the original space) down to 4 (in another space). So, in a way, you can iteratively try different embeddings for an increasing number of dimensions and look at the "stress" (or error) of each embedding. The number of dimensions you are after is the first number at which there is an abrupt drop in the error.
Due to the way it works, classical MDS can return a vector of eigenvalues for the new mapping. By examining this vector of eigenvalues you can determine how many of its entries you would need to retain to achieve a (good enough, or low error) representation of the original dataset.
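For illustration (not part of the original answer), here is a small NumPy sketch of classical MDS; D is assumed to be a symmetric matrix of pairwise distances, and the number of clearly positive eigenvalues suggests how many dimensions an (almost) exact Euclidean embedding needs:
import numpy as np
def classical_mds_eigenvalues(D):
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J.dot(D ** 2).dot(J)       # double-centered Gram matrix
    return np.linalg.eigvalsh(B)[::-1]    # eigenvalues, largest first
For the six-point all-ones distance matrix used in the demo above, this returns five equal positive eigenvalues and one zero, matching the observation that the stress reaches zero at five components.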
The key concept here is the "similarity" matrix which is a fancy name for a graph's distance matrix (which you already seem to have), irrespectively of its semantics.
Embedding algorithms, in general, are trying to find an embedding that may look different but at the end of the day, the point cloud in the new space will end up having a similar (depending on how much error we can afford) distance matrix.
In terms of code, I am sure that there is something available in all major scientific computing packages but off the top of my head I can point you towards Python and MATLAB code examples.
E.g. if the points are photos, and the distances between them calculated by the Gist algorithm (http://people.csail.mit.edu/torralba/code/spatialenvelope/), I would expect the derived dimension to match the number of image parameters considered by Gist.
Not exactly. This is a very good use case though. In this case, what MDS would return, or what you would be probing with dimensionality reduction in general would be to check how many of these features seem to be required to represent your dataset. Therefore, depending on the scenes, or, depending on the dataset, you might realise that not all of these features are necessary for a good enough representation of the whole dataset. (In addition, you might want to have a look at this link as well).
Hope this helps.
First, you can assume that any dataset has a dimensionality of at most 4 or 5. To get more relevant dimensions, you would need one million elements (or something like that).
Apparently, you already computed a distance. Are you sure it is actually a relevant metric? Is it efficient for images that are quite distant? Perhaps you can try Isomap (geodesic distance, starting from only close neighbors) and see whether your embedded space may not actually be Euclidean.

SVM with RBF: Decision values tend to be equal to the negative of the bias term for faraway test samples

Using the RBF kernel in an SVM, why do the decision values of test samples far away from the training ones tend to be equal to the negative of the bias term b?
A consequence is that, once the SVM model is generated, if I set the bias term to 0, the decision values of test samples far away from the training ones tend to 0. Why does this happen?
In LibSVM, the bias term b is rho. The decision value is the distance from the hyperplane.
I need to understand what defines this behavior. Does anyone understand that?
Running the following R script, you can see this behavior:
library(e1071)
library(mlbench)
data(Glass)
set.seed(2)
writeLines('separating training and testing samples')
testindex <- sort(sample(1:nrow(Glass), trunc(nrow(Glass)/3)))
training.samples <- Glass[-testindex, ]
testing.samples <- Glass[testindex, ]
writeLines('normalizing samples according to training samples between 0 and 1')
fnorm <- function(ran, data) {
  (data - ran[1]) / (ran[2] - ran[1])
}
minmax <- data.frame(sapply(training.samples[, -10], range))
training.samples[, -10] <- mapply(fnorm, minmax, training.samples[, -10])
testing.samples[, -10] <- mapply(fnorm, minmax, testing.samples[, -10])
writeLines('making the dataset binary')
training.samples$Type <- factor((training.samples$Type == 1) * 1)
testing.samples$Type <- factor((testing.samples$Type == 1) * 1)
writeLines('training the SVM')
svm.model <- svm(Type ~ ., data=training.samples, cost=1, gamma=2**-5)
writeLines('predicting the SVM with outlier samples')
points = c(0, 0.8, 1,                                   # non-outliers
           1.5, -0.5, 2, -1, 2.5, -1.5, 3, -2, 10, -9)  # outliers
outlier.samples <- t(sapply(points, function(p) rep(p, 9)))
svm.pred <- predict(svm.model, testing.samples[, -10], decision.values=TRUE)
svm.pred.outliers <- predict(svm.model, outlier.samples, decision.values=TRUE)
writeLines('') # printing
svm.pred.dv <- c(attr(svm.pred, 'decision.values'))
svm.pred.outliers.dv <- c(attr(svm.pred.outliers, 'decision.values'))
names(svm.pred.outliers.dv) <- points
writeLines('test sample decision values')
print(head(svm.pred.dv))
writeLines('non-outliers and outliers decision values')
print(svm.pred.outliers.dv)
writeLines('svm.model$rho')
print(svm.model$rho)
writeLines('')
writeLines('<< setting svm.model$rho to 0 >>')
writeLines('predicting the SVM with outlier samples')
svm.model$rho <- 0
svm.pred <- predict(svm.model, testing.samples[, -10], decision.values=TRUE)
svm.pred.outliers <- predict(svm.model, outlier.samples, decision.values=TRUE)
writeLines('') # printing
svm.pred.dv <- c(attr(svm.pred, 'decision.values'))
svm.pred.outliers.dv <- c(attr(svm.pred.outliers, 'decision.values'))
names(svm.pred.outliers.dv) <- points
writeLines('test sample decision values')
print(head(svm.pred.dv))
writeLines('non-outliers and outliers decision values')
print(svm.pred.outliers.dv)
writeLines('svm.model$rho')
print(svm.model$rho)
Comments about the code:
It uses a dataset of 9 dimensions.
It splits the dataset into training and testing.
It normalizes the samples between 0 and 1 for all dimensions.
It makes the problem to be binary.
It fits an SVM model.
It predicts the testing samples, getting the decision values.
It predicts some synthetic (outlier) samples outside [0, 1] in the feature space, getting the decision values.
It shows that the decision value for outliers tends to be the negative of the bias term b generated by the model.
It sets the bias term b to 0.
It predicts the testing samples, getting the decision values.
It predicts some synthetic (outlier) samples outside [0, 1] in the feature space, getting the decision values.
It shows that the decision value for outliers tends to be 0.
Do you mean negative of the bias term instead of inverse?
The decision function of the SVM is sign(w^T x - rho), where rho is the bias term, w is the weight vector, and x is the input. But that's in the primal space / linear form. w^T x is replaced by our kernel function, which in this case is the RBF kernel.
The RBF kernel is defined as K(x, x') = exp(-γ ‖x − x'‖²). So if the distance between two points is very large, it gets squared and we get a huge number. γ is a positive number, so we are turning that huge value into a huge negative value. exp(-10) is already on the order of 5*10^-5, so for far away points the RBF kernel is going to become essentially zero. If a sample is far away from all of your training data, then all of the kernel products will be nearly zero. That means w^T x will be nearly zero, and so what you are left with is essentially sign(0 - rho), i.e. the negative of your bias term.
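To make that concrete, here is a small NumPy sketch (not LibSVM itself; the support vectors and dual coefficients are made up) of the RBF decision value, showing why it collapses to -rho for distant points:
import numpy as np
def rbf_decision_value(x, sv, dual_coef, gamma, rho):
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))  # RBF kernel against each support vector
    return np.dot(dual_coef, k) - rho                   # sum_i (alpha_i * y_i) * k_i - rho
sv = np.random.rand(20, 9)        # pretend support vectors in [0, 1]^9
dual_coef = np.random.randn(20)   # pretend alpha_i * y_i
print(rbf_decision_value(np.full(9, 10.0), sv, dual_coef, gamma=2**-5, rho=0.5))  # ~ -0.5
For a faraway test point every kernel value is roughly exp(-gamma * huge), i.e. nearly zero, so the decision value tends to 0 - rho = -rho, which is exactly the behavior observed in the R script above.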
