I've run a Random Forest in R: fit.rf.
All I want to know is: when I type fit.rf, the output shows '% Var explained'. Is that the out-of-bag (OOB) variance explained?
Thanks in advance.
Yes, the % variance explained (regression) and the classification error (classification) are OOB cross-validated estimates. The exact formula is not that well documented, but it can be found in the print method.
Below is how you can inspect the print method for S3 model objects of class "randomForest":
library(randomForest)
# copy the unexported S3 print method so it can be inspected
aFunctionCopy <- randomForest:::print.randomForest
# open the copy in an editor...
fix(aFunctionCopy)
# ...or simply print its source to the console
print(aFunctionCopy)
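For regression forests, the formula used there is (assuming a current CRAN version of randomForest; worth confirming against your installed copy):

% Var explained = 100 * (1 - MSE_OOB / Var(y))

where MSE_OOB is the out-of-bag mean squared error and Var(y) is the variance of the response, so it is indeed an OOB estimate.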
...and here is a link to previous related questions and answers:
Related
I have an ML.NET project, and so far everything has gone great. I have a motor that collects a power reading 256 times per rotation, and I push those readings into a model. Right now it determines the state of the motor nearly perfectly. The motor itself only has room for 38 values at a time, so I have been spending several rotations collecting the full 256 samples for my training data.
I would like to cut the sample size down to 38 so that I can determine the motor's state every rotation. If I just evenly space the samples down to 38, my model degrades a lot. I know I am not feeding the model the features it considers most important; I am just guessing and selecting data at random.
Is there a way I can see the importance of each value in the array during the training process? I was thinking I could use IDataView for this, and I found the statement below about it (link).
Standard ML schema: The IDataView system does not define, nor prescribe, standard ML schema representation. For example, it does not dictate representation of nor distinction between different semantic interpretations of columns, such as label, feature, score, weight, etc. However, the column metadata support, together with conventions, may be used to represent such interpretations.
Does this mean I can print out things such as the weight of each column, and if so, how would I do that?
I have actually only been working with ML.NET for a couple of weeks now, so I apologize if the question is naive; I assure you I have googled this as many ways as I can think of. Any advice would be appreciated. Thanks in advance.
EDIT:
Thank you for the answer; I was going down a completely useless path. I have been trying to get this to work following the example you linked to. I have 260 columns of numbers and one column with the condition as one of five text strings. This condition is what I am trying to predict.
The first time I tried it, it threw the error "expecting single but got string". No problem: I used .Append(mlContext.Transforms.Conversion.MapValueToKey("Label", "Label")) to convert the strings to key values, and then it threw the error "expected Single, got Key UInt32". Any ideas on how to push that into this function?
At any rate, thank you for the reply, but I guess my upvotes don't count yet, sorry; hopefully I can upvote it later, or someone else here can. Below is the code example.
// Create MLContext
MLContext mlContext = new MLContext();

// Load Data
IDataView data = mlContext.Data.LoadFromTextFile<ModelInput>(TRAIN_DATA_FILEPATH, separatorChar: ',', hasHeader: true);

// 1. Get the column name of input features.
string[] featureColumnNames =
    data.Schema
        .Select(column => column.Name)
        .Where(columnName => columnName != "Label")
        .ToArray();

// 2. Define estimator with data pre-processing steps
IEstimator<ITransformer> dataPrepEstimator =
    mlContext.Transforms.Concatenate("Features", featureColumnNames)
        .Append(mlContext.Transforms.NormalizeMinMax("Features"))
        .Append(mlContext.Transforms.Conversion.MapValueToKey("Label", "Label"));

// 3. Create transformer using the data pre-processing estimator
ITransformer dataPrepTransformer = dataPrepEstimator.Fit(data); // error here

// 4. Pre-process the training data
IDataView preprocessedTrainData = dataPrepTransformer.Transform(data);

// 5. Define Stochastic Dual Coordinate Ascent machine learning estimator
var sdcaEstimator = mlContext.Regression.Trainers.Sdca();

// 6. Train machine learning model
var sdcaModel = sdcaEstimator.Fit(preprocessedTrainData);

ImmutableArray<RegressionMetricsStatistics> permutationFeatureImportance =
    mlContext
        .Regression
        .PermutationFeatureImportance(sdcaModel, preprocessedTrainData, permutationCount: 3);

// Order features by importance
var featureImportanceMetrics =
    permutationFeatureImportance
        .Select((metric, index) => new { index, metric.RSquared })
        .OrderByDescending(myFeatures => Math.Abs(myFeatures.RSquared.Mean));

Console.WriteLine("Feature\tPFI");
foreach (var feature in featureImportanceMetrics)
{
    Console.WriteLine($"{featureColumnNames[feature.index],-20}|\t{feature.RSquared.Mean:F6}");
}
I believe what you are looking for is called Permutation Feature Importance (PFI). It tells you which features are most important by permuting each feature in isolation and then measuring how much that change affects the model's performance metrics. You can use this to see which features matter most to the model.
Interpret model predictions using Permutation Feature Importance is the doc that describes how to use this API in ML.NET.
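Conceptually, PFI shuffles one feature column at a time and records how much a metric (e.g. R^2) drops. Here is a minimal, language-agnostic sketch of that idea in Python with scikit-learn, purely for illustration; the toy data and model are made up:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Toy data: 100 rows, 3 features; only the first two actually matter.
X = rng.normal(size=(100, 3))
y = 3 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=100)

model = LinearRegression().fit(X, y)
baseline = r2_score(y, model.predict(X))

# Permute one column at a time and measure the drop in R^2;
# a bigger drop means the model relied more on that feature.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - r2_score(y, model.predict(X_perm))
    print(f"feature {j}: R^2 drop = {drop:.3f}")

The ML.NET API in the doc above does the same thing for your trained model and reports per-feature metric statistics.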
You can also use open-source packages that are much more sophisticated than what is found in ML.NET. I have an example on my GitHub of how to use R with advanced explainer packages to explain ML.NET models. You can get local instance explanations as well as global model breakdowns, diagnostics, feature interactions, etc.
https://github.com/bartczernicki/BaseballHOFPredictionWithMlrAndDALEX
I've been trying to generate human pose estimations. I came across many pretrained models (e.g. Pose2Seg, deep-high-resolution-net); however, these models only include scripts for training and testing, which seems to be the norm for code implementing models from research papers. In deep-high-resolution-net I tried to write a script to load the pretrained model and feed it my images, but the output I got was a bunch of tensors, and I have no idea how to convert them into the .json annotations that I need.
Total newbie here, sorry for my poor English in advance. Any tips are appreciated.
I would include my script, but it's over 100 lines.
PS: Is it polite to contact the authors and ask them if they can help? It seems a little distasteful.
I'm not doing skeleton-detection research, but your problem seems to be a general one.
(1) I don't think other people should have to teach you from the beginning how to load data and run their code.
(2) To run other people's code, just modify the test script they provide, e.g.
https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/blob/master/tools/test.py
They have already loaded the model for you:
model = eval('models.' + cfg.MODEL.NAME + '.get_pose_net')(
    cfg, is_train=False
)
if cfg.TEST.MODEL_FILE:
    logger.info('=> loading model from {}'.format(cfg.TEST.MODEL_FILE))
    model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False)
else:
    model_state_file = os.path.join(
        final_output_dir, 'final_state.pth'
    )
    logger.info('=> loading model from {}'.format(model_state_file))
    model.load_state_dict(torch.load(model_state_file))
model = torch.nn.DataParallel(model, device_ids=cfg.GPUS).cuda()
Then just call:
# evaluate the model on input x (your test data)
y = model(x)
# copy the output tensor back to the CPU and convert it to numpy
arr = y.data.cpu().numpy()
# write it out as CSV
np.savetxt('output.csv', arr)
You should be able to open it in Excel.
(3) "convert them to the .json annotations that I need".
That's the problem nobody can help. We don't know what format you want. For their format, it can be obtained either by their paper. Or looking at their training data by
X, y = torch.load('some_training_set_with_labels.pt')
By correlating the x and y. Then you should have a pretty good idea.
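For pose models like HRNet that output one heatmap per joint, a common convention is to take the argmax of each heatmap as that joint's location. Below is a minimal sketch of turning such output into JSON; the heatmap shape, the coordinate scaling, and the JSON layout are all assumptions you would need to adapt to your model and target annotation format:

import json
import numpy as np

# Placeholder output; in practice use `arr` from the snippet above.
# Assumed shape: (num_images, num_joints, H, W), one heatmap per joint.
arr = np.random.rand(2, 17, 64, 48)

num_images, num_joints, H, W = arr.shape
annotations = []
for i in range(num_images):
    keypoints = []
    for j in range(num_joints):
        heatmap = arr[i, j]
        # The heatmap peak is taken as the predicted joint position.
        y_idx, x_idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        keypoints.append({
            'joint': j,
            'x': int(x_idx),   # scale back to image coordinates if needed
            'y': int(y_idx),
            'score': float(heatmap[y_idx, x_idx]),
        })
    annotations.append({'image_id': i, 'keypoints': keypoints})

with open('annotations.json', 'w') as f:
    json.dump(annotations, f, indent=2)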
I have a question on BLEU score calculation for machine translation. I realized there may be different metrics for BLEU. I found that the code reports five values for BLEU, namely BLEU-1, BLEU-2, BLEU-3, BLEU-4 and finally BLEU, which seems to be an exponential average of the previous four BLEUs. Still, it is not clear to me what the difference between them is. Do you have any ideas? Thanks.
P.S. At first I thought this question was more theoretical and posted it on Meta Stack Exchange. A moderator closed it and commented that it was a Stack Overflow type question, so please don't punish me again. =)
source: http://www.statmt.org/book/slides/08-evaluation.pdf
I haven't heard of BLEU-1 and BLEU-2, but I guess they refer to the 1-gram, 2-gram, 3-gram and 4-gram terms in the BLEU formula; that is, precision[i] corresponds to BLEU-i in your question.
Actually, BLEU-n doesn't use only the n-gram scores. It computes the 1-gram through n-gram scores and gives them equal weight to compute a final score. See the "Cumulative N-Gram Scores" section at this link for more info.
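As a quick illustration, here is a minimal sketch using NLTK's sentence_bleu, whose weights argument controls exactly this distinction between individual and cumulative n-gram scores; the example sentences are made up:

from nltk.translate.bleu_score import sentence_bleu

reference = [['the', 'cat', 'is', 'on', 'the', 'mat']]
candidate = ['the', 'cat', 'sat', 'on', 'the', 'mat']

# Individual n-gram precision: all weight on a single n.
bleu1 = sentence_bleu(reference, candidate, weights=(1, 0, 0, 0))
bleu2 = sentence_bleu(reference, candidate, weights=(0, 1, 0, 0))

# Cumulative BLEU-4: equal weights over 1- through 4-grams,
# combined as a geometric mean (this is the usual "BLEU" score).
bleu = sentence_bleu(reference, candidate, weights=(0.25, 0.25, 0.25, 0.25))

print(bleu1, bleu2, bleu)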
I am doing an archived course on ML on edX, and thus I am not getting answers to my queries there; hence I am posting here.
The exercise I am working on is dimensionality reduction. We are required to create a function do_PCA on the armadillo data. The function is also given:
from sklearn.decomposition import PCA

def do_PCA(armadillo):
    pca = PCA(n_components=2, svd_solver='full')
    pca.fit(armadillo)
    PCA(copy=True, n_components=2, whiten=False)  # no-op: constructs and discards a second PCA object
    T = pca.transform(armadillo)
    armadillo.shape
    T.shape
    ## print(T)  ## i added this extra line
    return None
My question is: does T represent the reduced dataset, or does 'armadillo'? And don't we have to change 'return None' to 'return T' or 'return armadillo' in order to later plot the reduced dataset?
Hope my question makes sense!
Once I know the answer to this, I have a subsequent question as well!
OK, the above part was answered by #vivekkumar. Now the second part: in the same lab assignment, the instructor makes us run the following code:
import datetime

t1 = datetime.datetime.now()
for i in range(5000):
    pca = do_PCA(armadillo)
time_delta = datetime.datetime.now() - t1
The variable time_delta is then used in a scatter plot.
My question is as follows: why is he running do_PCA 5000 times?
The instructor has a note which reads: "PCA is ran 5000x in order to help decrease the potential of rogue processes altering the speed of execution."
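For context, this is the same idea behind Python's timeit module, which runs a statement many times to average out interference from other processes. A minimal sketch, assuming do_PCA and armadillo as defined above:

import timeit

# Run do_PCA 5000 times and report the total elapsed time;
# repeating smooths out noise from other processes on the machine.
elapsed = timeit.timeit(lambda: do_PCA(armadillo), number=5000)
print(f"5000 runs took {elapsed:.2f} s")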
Thanks again!
I'm studying SVM and implemented this code. It's too basic and primitive, and it takes too much time, but I just wanted to see how it actually works. Unfortunately, it is giving me bad results. What did I miss? A coding error or a mathematical mistake? If you want to look at the dataset, its link is here. I took it from the UCI Machine Learning Repository. Thanks for your help.
import numpy as np

def hypo(x, q):
    # logistic (sigmoid) hypothesis
    return 1 / (1 + np.exp(-x.dot(q)))

data = np.loadtxt('LSVTVoice', delimiter='\t')

# add an intercept column of ones; the last data column is the label
x = np.ones(data.shape)
x[:, 1:] = data[:, 0:data.shape[1] - 1]
y = data[:, data.shape[1] - 1]
q = np.zeros(data.shape[1])
C = 0.002

## mean normalization
for i in range(q.size - 1):
    x[:, i + 1] = (x[:, i + 1] - x[:, i + 1].mean()) / (x[:, i + 1].max() - x[:, i + 1].min())

# parameter update loop
for i in range(2000):
    h = x.dot(q)
    for j in range(q.size):
        q[j] = q[j] - (C * np.sum(-y * np.log(hypo(x, q)) - (1 - y) * np.log(1 - hypo(x, q)))) + (0.5 * np.sum(q ** 2))

# print the predicted class next to the true label
for i in range(y.size):
    if h[i] >= 0:
        print(y[i], '1')
    else:
        print(y[i], '0')
Depending on your data, it is quite usual for a simple from-scratch implementation of SVM to give bad results. You should try a mature implementation (e.g. scikit-learn's SVM); you can also check this: https://github.com/scikit-learn/scikit-learn/tree/master/sklearn/svm
SVM has several implementation variants and parameters, such as different kernels (e.g. RBF). You should try them with different parameters (depending on your data) and compare the results to each other.
You can use a grid-search approach for the comparison (check this: http://scikit-learn.org/stable/modules/grid_search.html)
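For illustration, here is a minimal sketch of that grid-search approach with scikit-learn; the toy dataset and the parameter grid are stand-ins you would replace with the LSVT data and values suited to it:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy binary-classification data standing in for the LSVT voice dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features, then fit an SVM; search over kernel and regularization.
pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {
    'svc__kernel': ['linear', 'rbf'],
    'svc__C': [0.1, 1, 10],
    'svc__gamma': ['scale', 0.01, 0.001],
}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)

print('best params:', search.best_params_)
print('best CV accuracy:', search.best_score_)
print('held-out test accuracy:', search.score(X_test, y_test))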