Why is the validation accuracy stuck at the baseline accuracy? (EfficientNet)

I am using EfficientNetB0 for custom training. I get the following output:
In order to debug, I tried using the same data for training and testing and got the following error:
This does not happen with other networks such as Xception and Inception. What could be the reason?
P.S. I have shown only 4 epochs here, but I have actually tried up to 30 and the problem is the same.
I am also pasting a picture of the predictions; they are the same in each row.
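For reference, a minimal sketch of the kind of setup described (an assumption, not the asker's actual code: tf.keras, 224x224 RGB inputs, and all dataset/variable names are placeholders), where the same dataset is passed as both training and validation data:

import tensorflow as tf

num_classes = 4  # placeholder
base = tf.keras.applications.EfficientNetB0(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# train_ds would be a tf.data.Dataset of (image, one_hot_label) batches;
# passing it as validation_data too mirrors the debugging step described above.
# model.fit(train_ds, validation_data=train_ds, epochs=4)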

Related

CatBoost's incremental training with "init_model" fails when not all initial labels are present in the new data

catboost python version: 1.0.6
I am training a CatBoostClassifier on 10 different output classes, which works fine. Then I incrementally train a new classifier, using the earlier trained model as init_model and fitting on a new training dataset. The catch is that this dataset contains only 2 of the original 10 unique labels. CatBoost already warns me with: Found only 2 unique classes in the data, but have defined 10 classes. Probably something is wrong with data.
but it starts training fine anyway. Only at the end (I assume when the model gets merged with the original one?) do I get the following error message:
Exception has occurred: CatBoostError
CatBoostError: catboost/libs/model/model.cpp:1716: Approx dimensions don't match: 10 != 2
Is it expected behavior that incremental training is not possible on only a subset of the original classes? If so, then perhaps a clearer error message should be given. It would be even better if the code could handle this case, but maybe I'm overlooking something that makes such functionality impossible.
A similar issue has been posted on GitHub: https://github.com/catboost/catboost/issues/1953
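For context, a minimal sketch that reproduces the situation described (synthetic data; all names, sizes, and parameter values are placeholders, not the asker's actual code):

from catboost import CatBoostClassifier
import numpy as np

X_full, y_full = np.random.rand(1000, 20), np.random.randint(0, 10, 1000)  # all 10 classes present
X_new, y_new = np.random.rand(200, 20), np.random.randint(0, 2, 200)       # only 2 of the 10 classes

base = CatBoostClassifier(iterations=50, verbose=False)
base.fit(X_full, y_full)

incremental = CatBoostClassifier(iterations=50, classes_count=10, verbose=False)
# Training proceeds (with the "Found only 2 unique classes" warning) and the
# "Approx dimensions don't match: 10 != 2" error is raised at the end.
incremental.fit(X_new, y_new, init_model=base)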

What is the error when using a supplied test set for prediction?

I am trying to analyze the Titanic dataset and build a predictive model. I have preprocessed the datasets. Now, when I try to predict using the test set, it doesn't show any result, and I don't know why.
Titanic_test.arff
Titanic_train.arff
If you open the two files (training and test set) you will notice a difference: in the training set the last column has value 0 or 1, whereas in the test set it has ? (undefined).
This means that your test set doesn't contain the answers, so Weka cannot do any evaluation. It can still make predictions, though.
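For illustration, this is what the difference looks like in ARFF terms (the attribute values below are made up; only the trailing class column matters):

% training instance: the class value (last column) is present
3,male,22,7.25,0
% test instance: the class value is '?', i.e. undefined
3,male,22,7.25,?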

Pretraining error increases in each epoch in Deep Belief Network

I am using this implementation of DBN.
http://deeplearning.net/tutorial/code/DBN.py
I am using ECG data to train the model; each row contains 100 float values (in millivolts).
When I run this implementation, the pretraining cost keeps increasing, and I don't understand why.
I am attaching sample input data files and the DBN code, in which I have modified the number of input and output units and the batch size. I have also modified the 'load_data' code in logistic_sgd.py, so I am attaching that file too.
Here is the scenario:
Why is this happening? Where am I going wrong?
Link to code and data files:
https://drive.google.com/open?id=0B02Uz-muAJWWVktyaDFOekU5Ulk
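For context, the kind of load_data change described might look roughly like this (a sketch only: the file name, delimiter, and the assumption of a trailing label column are guesses, not the asker's actual code):

import numpy
import theano
import theano.tensor as T

def shared_dataset(data_x, data_y, borrow=True):
    # Wrap numpy arrays in Theano shared variables, as in the tutorial's logistic_sgd.py
    shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX), borrow=borrow)
    shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX), borrow=borrow)
    return shared_x, T.cast(shared_y, 'int32')

def load_data(csv_path):
    raw = numpy.loadtxt(csv_path, delimiter=',')   # one ECG window of 100 floats per row
    data_x, data_y = raw[:, :100], raw[:, 100]     # assumes a trailing class label column
    return shared_dataset(data_x, data_y)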

100% accuracy from libsvm

I'm training and cross-validating (10-fold) data using libSVM (with linear kernel).
The data consist of 1800 fMRI intensity voxels per datapoint.
There are around 88 datapoints in the training-set file for svm-train.
The training-set file looks as follows:
+1 1:0.9 2:-0.2 ... 1800:0.1
-1 1:0.6 2:0.9 ... 1800:-0.98
...
I should also mention I'm using the svm-train script (which comes with the libSVM package).
The problem is that when I run svm-train, it reports 100% accuracy!
This doesn't seem to reflect the true classification results!
The data isn't unbalanced, since
#datapoints labeled +1 == #datapoints labeled -1
I've also checked the scaling (it scales correctly), and I've tried changing the labels randomly to see how that impacts the accuracy: it only decreases from 100% to 97.9%.
Could you please help me understand the problem?
And what can I do to fix it?
Thanks,
Gal Star
Make sure you include '-v 10' in the svm-train options. I'm not sure whether your 100% accuracy comes from the training samples or the validation samples. It is very possible to get 100% training accuracy, since you have far fewer samples than features. But if your model suffers from overfitting, the validation accuracy may be low.
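For reference, a 10-fold cross-validation run with a linear kernel would be invoked roughly like this (the file name is a placeholder):
svm-train -t 0 -v 10 train.scaled
Here -t 0 selects the linear kernel and -v 10 requests 10-fold cross-validation; with -v, the reported number is the cross-validation accuracy rather than the training accuracy.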

How to re-evaluate a model in WEKA?

I am trying to solve a numeric prediction problem with numeric attributes in WEKA using linear regression, and then I want to test my model on the existing dataset with "re-evaluate model on current test set".
As a result of the evaluation I am getting the summary:
Correlation coefficient 0.9924
Mean absolute error 1.1017
Root mean squared error 1.2445
Total Number of Instances 17
But I don't get results like those shown here: http://weka.wikispaces.com/Making+predictions
How to bring WEKA to the result I need?
Thank you.
To answer my own question: for a trained and tested model, right-click on the model and go to 'Visualize classifier errors'. There, use the save option to save the actual and predicted values.
Are you using the command line interface (CLI) or the GUI?
If CLI, the command given in the link above works fine:
java weka.classifiers.trees.J48 -T unclassified.arff -l j48.model -p 0
So when you train the model, you save it as *.model (j48.model) and later use it to make predictions on the test data (unclassified.arff).
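For completeness, the training step that produces the saved model could look like this (file names are placeholders; -t gives the training file and -d saves the trained model). For the linear-regression case in the question, the classifier class would be weka.classifiers.functions.LinearRegression instead of J48:
java weka.classifiers.trees.J48 -t train.arff -d j48.model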
