representing data for usage with libsvm (sparse or not) - machine-learning

I'd like to do some data mining on function stack traces. For this I am using libsvm
and representing the data in a sparse format for speed of processing. Each stack trace is an instance and the variables are functions, i.e.:
class1 F1,F2,F1,F456,F3
class2 F4,F4,F4,F56,F3000
...
Somewhere I have an ever-growing registry of the unique functions seen so far; this is where the function indexes come from. Ideally, I'd like to represent the aforementioned instances using a sparse format, partitioned into 5 variables, like this:
1 1:1 2:1 1:1 456:1 3:1
2 4:1 4:1 4:1 56:1 3000:1
This is not possible in libsvm's format, so I am adding the length of the total function registry to each group to avoid index clashes. If we suppose there are 3000 functions in total, the first instance now looks like this:
1 1:1 3002:1 6001:1 9456:1 12003:1
This works as long as the number of functions doesn't change, but that is not the case: new functions are added all the time, so I would have to redo the whole thing each time.
I am using a sparse format, but suggestions for other formats are welcome too. I am able to use the data with Weka in a dense format, using the function names as variables, and it works, just much slower than with libsvm.
thanks!

You have a few options:
a) redo the whole thing each time (I think generating inputs for libsvm is faster than libsvm itself :))
b) Use even numbers for the first group and odd numbers for the other, so they can never clash. Your example would then look like:
1 2:1 3:1 1:1 911:1 5:1
This avoids collisions and you don't have to redo the whole thing :)
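Generalizing option (b) to the five position groups in the question, you can stride the function index by the number of positions, so groups never clash even as the registry keeps growing. Here is a minimal sketch in Python; the registry dict, the helper names and the example traces are illustrations, not part of the original question:

NUM_POSITIONS = 5   # fixed number of stack positions per instance
registry = {}       # ever-growing map: function name -> integer id

def func_id(name):
    # Return a stable id for a function, adding it to the registry if new.
    if name not in registry:
        registry[name] = len(registry)
    return registry[name]

def to_libsvm(label, functions):
    # Encode one stack trace as a libsvm sparse line. Feature index =
    # func_id * NUM_POSITIONS + position, so indices from different
    # positions can never clash, even as the registry keeps growing.
    pairs = []
    for pos, name in enumerate(functions[:NUM_POSITIONS]):
        index = func_id(name) * NUM_POSITIONS + pos + 1   # libsvm indices are 1-based
        pairs.append((index, 1))
    pairs.sort()   # libsvm expects ascending feature indices
    return str(label) + " " + " ".join(f"{i}:{v}" for i, v in pairs)

print(to_libsvm(1, ["F1", "F2", "F1", "F456", "F3"]))
print(to_libsvm(2, ["F4", "F4", "F4", "F56", "F3000"]))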

Related

Clustering "access-time" data sequences

I have many sequences of data looking like this:
s1 = t11, t12, ..., t1m_1
s2 = t21, t22, ..., t2m_2
...
si = ti1, ti2, ..., tim_i
si denotes the i-th sequence, and tij is the j-th time at which sequence si was accessed.
Each sequence has a different length (m_1 may not equal m_2),
and each sequence's data means that sequence si was accessed at times ti1, ti2, ..., tim_i.
My goal is to cluster the similar access-time sequences.
I'm not sure whether I can translate this problem into a time-series problem.
To my understanding, in time-series data each sequence's value is a measurement at that time (like stock data), whereas my sequences' values are the times at which the sequence was accessed.
Even if it can be translated into a time-series problem, there is another issue: the access times are very sparse (a sequence may be accessed at 1s, 1000s, 2000s), so a dense time-series representation would become very large, and I don't think I could cluster it with an algorithm like DTW, whose time complexity would be too high.
As you pointed out, DTW would be quite slow, since comparing the first two series takes k * m_1 * m_2 operations.
To avoid this, and to more easily compare your sequences, you might somehow hammer them into the same format (thereby also losing information).
Here are some ideas:
1. Differentiate to obtain times-between-accesses, and build histograms with fixed bins across all data.
2. Count the number of accesses during each minute of the week (and divide by the number of times that minute-of-week appears in each series). Adapt to the timescales of interest.
3. Count the "number of accesses up until now". So, instead of having data points only when an access was made ("sparse"), you'd get a data point for every timestamp ("dense") showing the number of accesses up to the current one.
#3 would be similar to an "integral image" in computer vision. After this, new summarization techniques open up, like moving averages, or even direct comparison (if the recordings happen in parallel).
In order to pick a more useful representation, you need to think about what is meaningful in your application.
After you get a uniform-length representation, you can use cheaper similarity measures. A typical one is cosine similarity (but be sure to normalize first).
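To illustrate idea 1 together with the cosine-similarity step, here is a minimal Python/NumPy sketch; the bin edges and the example access times are hypothetical choices, not from the question:

import numpy as np

def histogram_signature(access_times, bins):
    # Turn one sequence of access timestamps into a fixed-length vector:
    # differentiate to get times-between-accesses, then histogram them.
    gaps = np.diff(np.sort(np.asarray(access_times, dtype=float)))
    hist, _ = np.histogram(gaps, bins=bins)
    return hist.astype(float)

def cosine_similarity(a, b):
    # Cosine similarity between two signatures (normalization built in).
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0 or nb == 0:
        return 0.0
    return float(np.dot(a, b) / (na * nb))

# The same bin edges are used for every sequence, so all signatures are comparable.
bins = [0, 1, 10, 60, 600, 3600, np.inf]
s1 = histogram_signature([1, 3, 4, 1000, 2000], bins)
s2 = histogram_signature([2, 5, 7, 900, 2100], bins)
print(cosine_similarity(s1, s2))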

Are data dependencies relevant when preparing data for neural network?

Data: I have N rows of data of the form (x, y, z) where logically f(x, y) = z, that is, z depends on x and y; in my case (setting1, setting2, signal). Different x's and y's can lead to the same z, but the z's wouldn't mean the same thing.
There are 30 unique setting1, 30 setting2 and 1 signal for each (setting1, setting2)-pairing, hence 900 signal values.
Data set: These points, as a [900, 3] array, are considered one data set. I have many samples of these data sets.
I want to make a classification based on these data sets, but I need to flatten the data (turn each data set into one row). If I flatten it, I will duplicate all the setting values (setting1 and setting2) 30 times, i.e. I will have a row with 3x900 columns.
Question:
Is it correct to keep all the duplicate setting1, setting2 values in the data set? Or should I remove them and include each unique value only once, i.e. have a row with 30 + 30 + 900 columns? I'm worried that the logical dependency of the signal on the settings will be lost this way. Is this relevant? Or shouldn't I bother including the settings at all (e.g. due to correlations)?
If I understand correctly, you are training a NN on a sample where each observation is [900, 3].
You are flattening it and getting an input layer of 3*900.
Some of those values are a result of a function on others.
It is important which function, because if it is a linear function, the NN might not work:
From here:
"If inputs are linearly dependent then you are in effect introducing
the same variable as multiple inputs. By doing so you've introduced a
new problem for the network, finding the dependency so that the
duplicated inputs are treated as a single input and a single new
dimension in the data. For some dependencies, finding appropriate
weights for the duplicate inputs is not possible."
Also, if you add dependent variables you risk the NN being biased towards said variables.
E.g. If you are running LMS on [x1,x2,x3,average(x1,x2)] to predict y, you basically assign a higher weight to the x1 and x2 variables.
Unless you have a reason to believe that those weights should be higher, don't include their function.
I was not able to find any link to support this, but my intuition is that you might want to shrink your input layer further, in addition to omitting the dependent values:
From Professor A. Ng's ML course I remember that the input should be the minimum set of values that is 'reasonable' to make the prediction from.
Reasonable is vague, but I understand it like this: if you try to predict the price of a house, include footage, area quality and distance from a major hub; do not include the average sunspot activity on the open-house day, even though you have that data.
I would remove the duplicates, and I would also look for any other data that can be omitted, maybe by running PCA over the full set of N x [900, 3].
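As a rough sketch of this suggestion (removing the duplicated settings, then optionally compressing with PCA), assuming NumPy/scikit-learn; the helper names, the placeholder data and the choice of 10 components are hypothetical:

import numpy as np
from sklearn.decomposition import PCA

def flatten_without_duplicates(dataset):
    # dataset: one [900, 3] sample with columns (setting1, setting2, signal).
    # Returns one row of 30 + 30 + 900 values: each unique setting once,
    # plus all 900 signal values, instead of a 3*900-column row.
    setting1 = np.unique(dataset[:, 0])   # 30 unique setting1 values
    setting2 = np.unique(dataset[:, 1])   # 30 unique setting2 values
    signal = dataset[:, 2]                # 900 signal values
    return np.concatenate([setting1, setting2, signal])

def make_placeholder_dataset():
    # Placeholder sample: a 30 x 30 setting grid plus a random signal.
    s1, s2 = np.meshgrid(np.arange(30), np.arange(30))
    return np.column_stack([s1.ravel(), s2.ravel(), np.random.rand(900)])

datasets = [make_placeholder_dataset() for _ in range(20)]
X = np.vstack([flatten_without_duplicates(d) for d in datasets])   # shape (20, 960)
X_reduced = PCA(n_components=10).fit_transform(X)                  # optional compression
print(X.shape, X_reduced.shape)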

Numerically representing Nominal Data whilst retaining data semantics

I have a dataset of nominal and numerical features. I want to be able to represent this dataset entirely numerically if possible.
Ideally I would be able to do this for an n-ary nominal feature. I realize that in the binary case, one could represent the two nominal values with integers. However, when a nominal feature can have many permutations, how would this be possible, if at all?
There are a number of techniques to "embed" categorical attributes as numbers.
For example, given a categorical variable that can take the values red, green and blue, we can trivially encode this as three attributes isRed={0,1}, isGreen={0,1} and isBlue={0,1}.
While this is popular, and will obviously "work", many people fall for the fallacy of assuming that afterwards numerical processing techniques will produce sensible results.
If you run e.g. k-means on a dataset encoded this way, the result will likely not be too meaningful afterwards. In particular, if you get a mean such as isRed=.3 isGreen=.2 isBlue=.5 - you cannot reasonably map this back to the original data. Worse, with some algorithms you may even get isRed=0 isGreen=0 isBlue=0.
I suggest that you try to work on your actual data, and avoid encoding as much as possible. If you have a good tool, it will allow you to use mixed data types. Don't try to make everything a numerical vector. This mathematical view of data is quite limited and the data will not give you all the mathematical assumptions that you need to benefit from this view (e.g. metric spaces).
Don't do this: "I'm trying to encode certain nominal attributes as integers."
The exception is a nominal feature with only two possible values; then it is fine to use any two different integers (for example 1 and 3) for them.
But if there are more than two values, integers cannot be used. Let's say we assign 1, 2 and 3 to three values: the numeric differences then imply a closer relation between 1 and 2, and between 2 and 3, than between 1 and 3.
Rather, use a separate binary feature for each value of the nominal attribute. Thus, the answer to your question: it is not possible/wise.
If you use pandas, you can apply the pd.get_dummies() function to your nominal value column. This will turn a column with N unique values into N new columns (or N-1 if you pass drop_first=True), each indicating with a 1 or a 0 whether that value is present.
Example:
import pandas as pd

s = pd.Series(list('abca'))
pd.get_dummies(s)
   a  b  c
0  1  0  0
1  0  1  0
2  0  0  1
3  1  0  0
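A quick illustration of the drop_first variant mentioned above; dtype=int is used here only to keep the output as 0/1 integers (newer pandas versions otherwise return booleans):

pd.get_dummies(s, drop_first=True, dtype=int)
   b  c
0  0  0
1  1  0
2  0  1
3  0  0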

Kohonen Self Organizing Maps: Determining the number of neurons and grid size

I have a large dataset I am trying to do cluster analysis on using SOM. The dataset is HUGE (~ billions of records) and I am not sure what should be the number of neurons and the SOM grid size to start with. Any pointers to some material that talks about estimating the number of neurons and grid size would be greatly appreciated.
Thanks!
Quoting from the som_make function documentation of the SOM Toolbox:
It uses a heuristic formula of 'munits = 5*dlen^0.54321'. The
'mapsize' argument influences the final number of map units: a 'big'
map has x4 the default number of map units and a 'small' map has
x0.25 the default number of map units.
dlen is the number of records in your dataset
You can also read about the classic WEBSOM which addresses the issue of large datasets
http://www.cs.indiana.edu/~bmarkine/oral/self-organization-of-a.pdf
http://websom.hut.fi/websom/doc/ps/Lagus04Infosci.pdf
Keep in mind that the map size is also a parameter and it is application specific: it depends on what you want to do with the generated clusters. Large maps produce a large number of small but "compact" clusters (records assigned to each cluster are quite similar). Small maps produce fewer but more generalized clusters. A "right number of clusters" doesn't exist, especially in real-world datasets. It all depends on the level of detail at which you want to examine your dataset.
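As a rough Python sketch of the quoted heuristic; the square-grid default and the handling of the 'mapsize' argument are simplifications based on the quoted documentation, not the toolbox's exact algorithm:

import math

def default_som_grid(dlen, mapsize=None):
    # Heuristic from som_make: munits = 5 * dlen^0.54321,
    # scaled x4 for a 'big' map and x0.25 for a 'small' map.
    munits = 5 * dlen ** 0.54321
    if mapsize == 'big':
        munits *= 4
    elif mapsize == 'small':
        munits *= 0.25
    side = max(1, round(math.sqrt(munits)))   # plain square grid as a default
    return side, side

print(default_som_grid(10 ** 9))   # on the order of "billions of records"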
I have written a function that, given the data set as input, returns the grid size. I rewrote the som_topol_struct() function of MATLAB's Self Organizing Maps Toolbox as an R function.
topology <- function(data)
{
  # For a hexagonal lattice, determine the number of neurons (munits)
  # and their layout (msize).
  D <- data
  # munits: number of hexagons
  # dlen:   number of records (subjects)
  dlen <- dim(data)[1]
  dim  <- dim(data)[2]
  munits <- ceiling(5 * dlen^0.5)   # MATLAB heuristic formula
  # munits <- 100
  # size <- c(round(sqrt(munits)), round(munits / round(sqrt(munits))))
  A <- matrix(Inf, nrow = dim, ncol = dim)
  # Center each column, ignoring non-finite values
  for (i in 1:dim) {
    D[, i] <- D[, i] - mean(D[is.finite(D[, i]), i])
  }
  # Covariance matrix computed over the finite entries
  for (i in 1:dim) {
    for (j in i:dim) {
      c <- D[, i] * D[, j]
      c <- c[is.finite(c)]
      A[i, j] <- sum(c) / length(c)
      A[j, i] <- A[i, j]
    }
  }
  # The ratio of the two largest eigenvalues sets the grid's aspect ratio
  VS <- eigen(A)
  eigval <- sort(VS$values)
  if (eigval[length(eigval)] == 0 | eigval[length(eigval) - 1] * munits < eigval[length(eigval)]) {
    ratio <- 1
  } else {
    ratio <- sqrt(eigval[length(eigval)] / eigval[length(eigval) - 1])
  }
  size1 <- min(munits, round(sqrt(munits / ratio * sqrt(0.75))))
  size2 <- round(munits / size1)
  return(list(munits = munits, msize = sort(c(size1, size2), decreasing = TRUE)))
}
hope it helps...
Iván Vallés-Pérez
I don't have a reference for it, but I would suggest starting off by using approximately 10 SOM neurons per expected class in your dataset. For example, if you think your dataset consists of 8 separate components, go for a map with 9x9 neurons. This is completely just a ballpark heuristic though.
If you'd like the data to drive the topology of your SOM a bit more directly, try one of the SOM variants that change topology during training:
Growing SOM
Growing Neural Gas
Unfortunately these algorithms involve even more parameter tuning than plain SOM, but they might work for your application.
Kohonen has written on the issue of selecting parameters and map size for SOM in his book "MATLAB Implementations and Applications of the Self-Organizing Map". In some cases, he suggests the initial values can be arrived at after testing several sizes of the SOM, checking that the cluster structures are shown with sufficient resolution and statistical accuracy.
My suggestion would be the following:
SOM is distantly related to correspondence analysis. In statistics, they use 5*r^2 as a rule of thumb, where r is the number of rows/columns in a square setup.
Usually, one should use some criterion based on the data itself, meaning that you need some criterion for estimating the homogeneity. If a certain threshold is violated, you need more nodes. For checking the homogeneity you need some records per node. Again, from statistics you could learn that for simple tests (small number of variables) you need around 20 records, and for more advanced tests on several variables at least 8 records.
Remember that the SOM represents a predictive model, so validation is key, absolutely mandatory. Yet validation of predictive models (see the type I/II error entry on Wikipedia) is a subject of its own. The acceptable risk, as well as the risk structure, also depend fully on your purpose.
You may test the dynamics of the model's error rate by reducing its size more and more, then take the smallest one with an acceptable error.
It is a strength of the SOM to allow for empty nodes. Yet there should not be too many of them; let me say, less than 5%.
Taking it all together, from experience I would recommend the following criterion: a minimum absolute number of 8 to 10 records per node, while nodes falling below that should not make up more than 5% of all clusters.
This 5% rule is of course a heuristic, which can however be justified by the general usage of confidence levels in statistical tests. You may choose any percentage from 1% to 5%.

LIBSVM: Get support vectors from model file

This may be a weird request, so some explanation first. I recently had a sudden HD crash and lost a data file I was using to generate model files with libSVM. I do have the SVM model and scaling file that I generated from this data file, and I was wondering if there is a way to generate a data file from the support vectors in the model file, something like model_sv_to_instances(model, &instances), since the process for obtaining instances is very costly. (I know it won't be the same as the original, but still, it's better than nothing.) I'm using a probabilistic SVM with an RBF kernel.
If you open a given model file in any text editor you would find something like this:
svm_type c_svc
kernel_type sigmoid
gamma 0.5
coef0 0
nr_class 2
total_sv 4
rho 0
label 0 1
nr_sv 2 2
SV
1 1:0 2:0
1 1:1 2:1
-1 1:1 2:0
-1 1:0 2:1
The interesting part for you comes after the line with SV.
1 1:0 2:0
1 1:1 2:1
-1 1:1 2:0
-1 1:0 2:1
Those are the data points that were selected as support vectors, so you just have to parse the file. The format is as follows:
[label] [index1]:[value1] [index2]:[value2] ... [indexn]:[valuen]
For instance, from my example you can conclude that my training set was:
x y desired val
0 0 -1
0 1 1
1 0 1
1 1 -1
A few considerations and warnings: the ratio between the number of SVs and data points depends on the parameters that you used; in some cases the reduction is large and you would end up with very few SVs in comparison with your data.
Another thing to keep in mind is that this reduction is likely to change the problem: if you train again with only the SVs as data points, you will probably get a completely different model with a completely different set of parameters.
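A minimal Python sketch of that parsing step; note that the leading number on each SV line is the coefficient alpha_i * y_i rather than the original class label, so taking its sign as a stand-in label is only a rough reconstruction for two-class models, and the file names are hypothetical:

def support_vectors_to_data(model_path, out_path):
    # Copy the SV section of a libsvm model file into a libsvm data file.
    # The value before the features on each SV line is sv_coef (alpha * y),
    # not the original class label; its sign is used here as a stand-in
    # label, which is only a rough reconstruction for two-class models.
    with open(model_path) as f:
        lines = f.read().splitlines()
    sv_start = lines.index('SV') + 1   # everything after the 'SV' marker
    with open(out_path, 'w') as out:
        for line in lines[sv_start:]:
            if not line.strip():
                continue
            coef, _, features = line.partition(' ')
            label = 1 if float(coef) > 0 else -1
            out.write(f"{label} {features}\n")

# Hypothetical file names:
support_vectors_to_data('model.svm', 'recovered_instances.svm')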
Good luck!
In the case of RBF you are lucky. According to the libsvm FAQ you can extract the support vectors from the model file:
In the model file, after parameters and other informations such as labels , each line represents a support vector.
But remember, these are only the support vectors, which are only a fraction of your original input data.
To the best of my knowledge, SVM models in general, and libSVM models in particular, consist of only the support vectors. These vectors represent the borderline between the classes; most probably, they don't represent the vast majority of your data points. So, unfortunately, I don't think there's a way to regenerate your data from the model.
Having said that, I can think of an esoteric case where there might be some value to the model: there are companies specializing in recovering data in such cases (e.g. from crashed HDs). However, the recovered data sometimes has gaps; in certain cases, the model might be reverse-engineered to fill in some of the missing spots. However, this is very theoretical.
EDIT: as the other answers state, the proportion of data points represented by the support vectors might vary, depending on the specific problem and parameters. However, as stated above, in most common cases, you'll be able to reconstruct only a small fraction of your original data set.
