OneVsRestClassifier(svm.SVC()).predict() gives continuous values - machine-learning

I am trying to use y_scores=OneVsRestClassifier(svm.SVC()).predict() on datasets
like iris and titanic. The trouble is that I am getting y_scores as continuous values. For example, for the iris dataset I am getting:
[[ -3.70047231 -0.74209097 2.29720159]
[ -1.93190155 0.69106231 -2.24974856]
.....
I am using the OneVsRestClassifier for other classifier models like KNN, random forest, and naive Bayes, and they are giving appropriate results in the form of
[[ 0 1 0]
[ 1 0 1]...
etc. on the iris dataset. Please help.

Well this is simply not true.
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import SVC
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> clf = OneVsRestClassifier(SVC())
>>> clf.fit(iris['data'], iris['target'])
OneVsRestClassifier(estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False),
n_jobs=1)
>>> print clf.predict(iris['data'])
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 2 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
Maybe you called decision_function instead (which would match your output's dimensions, as predict is supposed to return a vector, not a matrix). In that case, the SVM returns signed distances to each hyperplane, which is its decision function from a mathematical perspective.
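For comparison, here is a minimal sketch (reusing the iris data from the transcript above; the exact numbers will depend on your scikit-learn version) that contrasts the two methods:
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.datasets import load_iris

iris = load_iris()
clf = OneVsRestClassifier(SVC()).fit(iris['data'], iris['target'])

print(clf.predict(iris['data'][:2]))            # class labels, shape (2,)
print(clf.decision_function(iris['data'][:2]))  # one signed distance per class, shape (2, 3)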

Related

How can I label connected components in APL?

I'm trying to do the LeetCode puzzle https://leetcode.com/problems/max-area-of-island/, which requires labelling connected (by sides, not corners) components.
How can I transform something like
0 0 1 0 0
0 0 0 0 0
0 1 1 0 1
0 1 0 0 1
0 1 0 0 1
into
0 0 1 0 0
0 0 0 0 0
0 2 2 0 3
0 2 0 0 3
0 2 0 0 3
I've played with the stencil ⌺ operator and also tried using scan operators, but I'm still not quite there. Can somebody help?
We can start off by enumerating the ones. We do this by applying the function ⍸ (Where; since all the selected elements are 1s, it is equivalent to 1,2,3,…) @ at the subset masked by ⊢ the bits themselves, i.e. ⍸@⊢:
⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 3 0 4
0 5 0 0 6
0 7 0 0 8
Now we need to flood-fill the lowest number into each component. We do this with repeated application until the fix-point ⍣≡ of processing Moore neighbourhoods ⌺3 3. To get the von Neumann neighbours, we reshape the first 8 of the 9 elements in the Moore neighbourhood into a 4-row 2-column matrix with 4 2⍴ and use ⊢/ to select the right column (the four side-neighbours). We remove any 0s with 0~⍨, prepend , the original value ⍵[2;2] (even if it is 0), and have ⌊/ select the smallest value:
{⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 2 0 4
0 2 0 0 4
0 2 0 0 4
We map the values to indices by finding where each value ⊢ occurs ⍳⍨ in the unique elements ∘∪ of 0 followed by , the ravelled matrix ,:
(⊢⍳⍨∘∪0,,){⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
1 1 2 1 1
1 1 1 1 1
1 3 3 1 4
1 3 1 1 4
1 3 1 1 4
Finally, we decrement, which adjusts the numbering back to begin with zero:
¯1+(⊢⍳⍨∘∪0,,){⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 2 0 3
0 2 0 0 3
0 2 0 0 3
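For comparison, here is a rough NumPy sketch of the same idea (my own illustration, not part of the APL answer): number the 1-cells, repeatedly replace each labelled cell with the minimum positive label among itself and its four side-neighbours until nothing changes, then renumber the surviving labels densely.
import numpy as np

m = np.array([[0,0,1,0,0],
              [0,0,0,0,0],
              [0,1,1,0,1],
              [0,1,0,0,1],
              [0,1,0,0,1]])

labels = np.zeros_like(m)
labels[m == 1] = np.arange(1, m.sum() + 1)           # enumerate the ones

while True:
    p = np.pad(labels, 1)                            # zero border around the grid
    # the four von Neumann (side) neighbours of every cell
    neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
    neigh = np.where(neigh == 0, labels.max() + 1, neigh)   # ignore zeros in the min
    new = np.where(labels > 0, np.minimum(labels, neigh.min(axis=0)), 0)
    if np.array_equal(new, labels):                  # fixed point reached
        break
    labels = new

_, dense = np.unique(labels, return_inverse=True)    # remap labels to 0,1,2,…
print(dense.reshape(labels.shape))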

Predict next integer in sequence using ML.NET

Given a lengthy sequence of integers in the range 0-1, I would like to be able to predict the next likely integer.
Example dataset:
1 1 1 0 0 0 0 1 1 0 0 1 0 1 1 0 0 0 1 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 1 0 1 0 1 0 1 0 0 1 0 0 0 0 1 1 1 1 0 0 0 1 0 0 1 1 0 0 0 1 0 1 1 0 1 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0
A quick look at the above perhaps shows some obvious patterns which may be recognised by an ML model.
I do have other features available in the dataset but I don't think they correlate to the integer result so the prediction should be based purely on the statistical relevance of the supplied integer dataset.
I'm unsure how to approach this using ML.NET. I have successfully trained classification models previously, but those predictions were all made based on multiple features. In this case, if I just supply a 0 or 1, there's no relevant historical sequence to aid the prediction.
How do I train an ML.NET model to return a prediction based on a range of previous data?
Working theory: the above dataset has 100 integers. I could create a class which has 100 properties (Integer0..Integer99) and painstakingly map each field and submit that, but it seems really clunky.

Extra zeros appended in confusion matrix making it 3x3 instead of 2x2 using IsolationForest for Anomaly detection

I am using the below code for anomaly detection. It is a binary classification problem, so the confusion matrix should be 2x2, but instead it is 3x3, with extra zeros appended in a T-shape. A similar thing happened with OneClassSVM a few weeks back as well, but I thought I was doing something wrong then. Could you please help me fix this?
import numpy as np
import pandas as pd
import os
from sklearn.ensemble import IsolationForest
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from sklearn import metrics
from sklearn.metrics import roc_auc_score
data = pd.read_csv('opensky_train.csv')
#to make sure that normal data contains no anomaly
sortedData = data.sort_values(by=['class'])
target = pd.DataFrame(sortedData['class'])
Y = target.replace(['surveill', 'other'], [1,0])
X = sortedData.drop(['class'], axis = 1)
x_normal = X.iloc[:200,:]
y_normal = Y.iloc[:200,:]
x_anomaly = X.iloc[200:,:]
y_anomaly = Y.iloc[200:,:]
Edited:
column_values = y_anomaly.values.ravel()
unique_values = pd.unique(column_values)
print(unique_values)
Output : [0 1]
clf = IsolationForest(random_state=0).fit(x_normal)
pred = clf.predict(x_anomaly)
print(pred)
Output : [ 1 1 1 1 1 1 -1 1 -1 1 1 1 1 1 1 1 1 1 1 -1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 -1 1 1 1 1 1 1 -1 1 1 -1 1 1 -1 1 1 -1 1 -1 1
-1 1 1 -1 -1 1 -1 -1 1 1 1 1 -1 1 1 -1 -1 1 1 1 1 1 1 1
-1 1 1 1 1 1 1 1 1 1 -1]
#printing the results
print(confusion_matrix(y_anomaly, pred))
print (classification_report(y_anomaly, pred))
Result:
Confusion Matrix :
[[ 0 0 0]
[ 7 0 60]
[12 0 28]]
precision recall f1-score support
-1 0.00 0.00 0.00 0
0 0.00 0.00 0.00 67
1 0.32 0.70 0.44 40
accuracy 0.26 107
macro avg 0.11 0.23 0.15 107
weighted avg 0.12 0.26 0.16 107
Inliers are labeled 1, while outliers are labeled -1
Source: scikit-learn Anomaly and Outlier detection.
Your example has transformed the classes to 0 and 1, so the three possible labels are -1, 0, and 1.
You need to change from
Y = target.replace(['surveill', 'other'], [1,0])
to
Y = target.replace(['surveill', 'other'], [1,-1])
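To see where the extra row and column come from: confusion_matrix builds its axes from the sorted union of the labels found in y_true and y_pred. A minimal, self-contained sketch with toy data (not the opensky dataset):
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1])      # classes encoded as 0/1
y_pred = np.array([1, -1, 1, -1])    # IsolationForest outputs -1/1
print(confusion_matrix(y_true, y_pred).shape)        # (3, 3): labels are {-1, 0, 1}

y_true_fixed = np.where(y_true == 0, -1, 1)          # re-encode the 0 class as -1, matching the suggested fix
print(confusion_matrix(y_true_fixed, y_pred).shape)  # (2, 2): labels are {-1, 1}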

Transform string variable into 0-1 columns

As a complete beginner in SPSS, I would like to ask for your help with a transformation from table A into table B. I have to recode the values of the "brand" variable into columns and make 0-1 variables.
#table A#
nr brand
1 GREEN CARE PROFESSIONAL
1 GREEN CARE PROFESSIONAL
1 GREEN CARE PROFESSIONAL
2 HENKEL
3 HENKEL
3 HENKEL
3 HENKEL
3 VIZIR
4 BIEDRONKA
4 BOBINI
4 BOBINI
4 BOBINI
4 BOBINI
4 BOBINI
4 HENKEL
5 VIZIR
6 HENKEL
#table B#
nr GREEN HENKEL VIZIR BIEDR BOBINI
1 1 0 0 0 0
1 1 0 0 0 0
1 1 1 0 0 0
2 0 1 0 0 0
3 0 1 0 0 0
3 0 1 0 0 0
3 0 1 0 0 0
3 0 0 1 0 0
4 0 0 0 1 0
4 0 0 0 0 1
4 0 0 0 0 1
4 0 0 0 0 1
4 0 0 0 0 1
4 0 0 0 0 1
4 0 1 0 0 0
5 0 0 1 0 0
6 0 1 0 0 0
I can do it in this particular case in this simple way:
compute HENKEL=0.
...
do if BRAND='GREEN_CARE' .
compute GREEN_CARE=1.
else if ....
but the loop has to be usable with another variable and a different number of values, etc. I was trying to make it work all day and gave up.
Do you have any idea how to do it in an easy way?
Thanks!
The following syntax does the job on the sample data you provided.
First, let's recreate the sample data to demonstrate on:
Data list list/nr (f1) brand (a30).
begin data
1 "GREEN CARE PROFESSIONAL"
1 "GREEN CARE PROFESSIONAL"
1 "GREEN CARE PROFESSIONAL"
2 "HENKEL"
3 "HENKEL"
3 "HENKEL"
3 "HENKEL"
3 "VIZIR"
4 "BIEDRONKA"
4 "BOBINI"
4 "BOBINI"
4 "BOBINI"
4 "BOBINI"
4 "BOBINI"
4 "HENKEL"
5 "VIZIR"
6 "HENKEL"
end data.
dataset name originalDataset.
Now for the restructure.
sort cases by nr brand.
* creating an index to enumerate cases for each combination of `nr` and `brand`.
* This is necessary for the `casestovars` command to work later.
compute ind=1.
if $casenum>1 and lag(nr)=nr and lag(brand)=brand ind=lag(ind)+1.
exe.
* variable names can't have spaces in them, so changing the category names accordingly.
compute brand=replace(rtrim(brand)," ","_").
sort cases by nr ind brand.
compute exist=1.
casestovars /id=nr ind /index= brand/autofix=no.
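As an aside (not part of the SPSS answer): for readers who need the same 0-1 dummy columns in Python, pandas.get_dummies does this directly; a small sketch on the first few rows of the sample data:
import pandas as pd

df = pd.DataFrame({
    'nr':    [1, 1, 1, 2, 3, 3, 3, 3],
    'brand': ['GREEN CARE PROFESSIONAL'] * 3 + ['HENKEL'] * 4 + ['VIZIR'],
})
dummies = pd.get_dummies(df['brand']).astype(int)   # one 0/1 column per brand
print(pd.concat([df[['nr']], dummies], axis=1))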

Logistic Regression prediction faults

I have been trying to solve the Titanic survival problem, where I split x to be the passengers and y to be the survived column. The problem is that I can't get proper y_pred (i.e., prediction) results: the prediction is 0 for all values. It would be helpful if anyone could solve it, as this is my first classification problem as a beginner.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('C:/Users/Umer/train.csv')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
#predicting the test set results
y_pred = classifier.predict(x_test)
I couldn't reproduce the same result. In fact, I copy-pasted your code and did not get all zeros as you described; instead I got:
[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0]
Nevertheless, there are a few things I noticed in your approach that you may want to know about:
The default separator in Pandas read_csv is ,, so if your dataset's variables are separated by a tab (like the one I have), you should specify the separator like this:
df = pd.read_csv('titanic.csv', sep='\t')
PassengerId has no useful information that your model could learn from in order to predict the Survived outcome; it's just a number that increments with each new passenger. Generally speaking, in classification, you need to make use of all the features your model can learn from (unless, of course, there are redundant features that add no information), especially since yours is a multivariate dataset.
There is no point in scaling PassengerId, because feature scaling is usually used when features vary highly in magnitude, units, and range (e.g. 5 kg and 5000 g), and in your case, as I mentioned, it's just an incremental integer which carries no real information for the model.
One last thing: you should cast your data to type float for StandardScaler to avoid warnings like the following:
DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.
So convert it like this from the beginning:
x = df['PassengerId'].values.astype(float).reshape(-1,1)
Finally, if you're still getting the same result, please add a link to your dataset.
Update
After you provided the dataset, it turns out that the result you're getting is correct; that is again because of reason number 2 mentioned above (PassengerId provides no useful information to the model, so it cannot predict correctly!).
You can test it yourself by comparing the log loss before and after adding more features from the dataset:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
df = pd.read_csv('train.csv', sep=',')
x = df['PassengerId'].values.reshape(-1,1)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function using only the PassengerId
print(log_loss(y_train, y_pred_train))
#predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output
13.33982681120802
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0]
Now, by using some "supposedly useful" information:
from sklearn.metrics import log_loss
df = pd.read_csv('train.csv', sep=',')
# denote the words female and male as 0 and 1
df['Sex'].replace(['female','male'], [0,1], inplace=True)
# try three features that you think they are informative to the model
# so it can learn from them
x = df[['Fare', 'Pclass', 'Sex']].values.reshape(-1,3)
y = df['Survived']
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.25,
random_state = 0)
classifier = LogisticRegression()
classifier.fit(x_train,y_train)
y_pred_train = classifier.predict(x_train)
# calculate and print the loss function with the above 3 features
print(log_loss(y_train, y_pred_train))
#predicting the test set results
y_pred = classifier.predict(x_test)
print(y_pred)
Output
7.238735137632405
[0 0 0 1 1 0 1 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 0 0 1 1 0 1 1 1 0 1 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 0 0 0
0 1 1 0 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 1 1 1 1 0 1 0
1 0 1 0 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 1 1 0 1
1 0 0 1 1 0 1 0 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0
0 1 0 0 1 1 0 1 1 0 0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1 1 0 1
1]
In conclusion: as you can see, the loss now has a better (lower) value than before, and the predictions are more reasonable!
