How to obtain elastic net regression coefficients from a model created with the caret package train function - r-caret

I am wondering how I can obtain the final model's regression coefficients for an elastic net model fitted using the train function from the caret package.
I tried using
coef(enetModelObj$finalModel)
but this returns NULL.
Thanks for your help.
LG

Related

Generator and discriminator loss curves are exact mirror images

I am currently training a GAN in PyTorch to produce histopathology data for my research, using the BCE criterion for both the generator and the discriminator. The network produces good-quality images, but the loss curves are a bit mysterious to me.
The generator and discriminator loss curves look like exact mirror images of each other. See the attached TensorBoard screenshot. Can someone tell me why this is happening?
Edit 1: Both generator and discriminator loss curves should show convergence, right?
Thanks a lot in advance!
The training curves you show are fairly standard for GAN training, and the generator and discriminator do converge. The mirroring is the adversarial property itself: a step that lowers the generator's loss tends to raise the discriminator's, and vice versa, so plotting the two losses together makes them look like reflections. In practice, the most reliable way to validate the model is to inspect the generated images rather than the loss values.
Here are some of my own results for reference.
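The mirroring can be seen directly in the BCE objectives themselves. Below is a stdlib-only sketch (the probability values are illustrative, not from any real training run): for a fake sample where the discriminator outputs probability p, the generator's loss is -log(p) while the discriminator's fake-side loss is -log(1 - p), so one must fall as the other rises.

```python
import math

def generator_bce(p_fake):
    """Generator's BCE loss on a fake sample: -log(D(G(z)))."""
    return -math.log(p_fake)

def discriminator_bce_fake(p_fake):
    """Discriminator's BCE loss on the same fake sample: -log(1 - D(G(z)))."""
    return -math.log(1.0 - p_fake)

# As the generator improves, D(G(z)) climbs toward 1, and the two
# losses move in opposite directions:
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"D(G(z))={p:.1f}  gen_loss={generator_bce(p):.3f}  "
          f"disc_fake_loss={discriminator_bce_fake(p):.3f}")
```

One loss is strictly decreasing in p and the other strictly increasing, which is exactly the mirrored shape in the TensorBoard curves.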

Does auto-differentiation in tensorflow work when combining activations from multiple nets into one objective?

I am new to tensorflow and trying to figure out if the auto-differentiation feature in tensorflow will solve my problem.
So I have two nets, each of which outputs a latent vector. Net A outputs a latent vector La (Hxr), where (H, r) are the dimensions of La; similarly, net B outputs Lb (Wxr). My objective function takes both latents as input and combines them as La.Lb', where (.) is the dot product and (') is the transpose. I will be optimizing this objective function using cross-entropy.
Now my question is: will TensorFlow's auto-diff be able to calculate the gradients correctly and backpropagate them? It's not a straightforward case: net A should only be updated from the gradients w.r.t. La, and net B should only be updated from the gradients w.r.t. Lb. So is TensorFlow smart enough to figure that out? And is there a way to validate this?
Thanks!
TensorFlow supports auto-differentiation of any computational graph you can define with it. I have used TensorFlow to combine the predictions of multiple nets and compute a loss with different loss functions. So yes, TensorFlow is smart enough to figure this out: each network only receives gradients through its own output, and the gradients are computed and backpropagated correctly.
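As a sanity check of the underlying principle (plain Python, no TensorFlow; all names below are invented for illustration), here is the chain rule for two scalar "nets" feeding one combined objective, verified against finite differences. Each net's weight gradient flows only through that net's own output, with the other net's output entering only as a multiplicative value:

```python
# Two "nets" are reduced to scalar weights wa, wb acting on input x,
# combined into one objective L = La * Lb (a scalar "dot product").

def forward(wa, wb, x):
    la = wa * x          # output of net A
    lb = wb * x          # output of net B
    return la * lb       # combined objective

def analytic_grads(wa, wb, x):
    la, lb = wa * x, wb * x
    # Chain rule: dL/dwa flows only through la, dL/dwb only through lb.
    return lb * x, la * x

def numeric_grad(f, w, eps=1e-6):
    # Central finite difference, to validate the analytic gradients.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

wa, wb, x = 0.7, -1.3, 2.0
ga, gb = analytic_grads(wa, wb, x)
na = numeric_grad(lambda w: forward(w, wb, x), wa)
nb = numeric_grad(lambda w: forward(wa, w, x), wb)
print(ga, na)  # gradient w.r.t. net A's weight, both ways
print(gb, nb)  # gradient w.r.t. net B's weight, both ways
```

In TensorFlow itself you can validate the same thing empirically: compute the gradients of the loss with respect to each net's variables separately (e.g. with tf.gradients or tf.GradientTape) and confirm each variable set only receives gradient through its own net's output.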

OpenCV SVM: Update a trained SVM

I am using OpenCV 2.4.6 (the only version available on this PC).
I have an SVM model with the default parameters which is already trained. Now I want to update it with new samples.
CvSVM Support_Vector_Machine;
Support_Vector_Machine.load(FilePath);
CvSVMParams Parameter = Support_Vector_Machine.get_params();
Support_Vector_Machine.train(TrainMatrix, Labels, cv::Mat(), cv::Mat(), Parameter);
The problem is, as mentioned in the OpenCV statistical models documentation, that the train method calls CvStatModel::clear(), so my trained model gets overwritten.
Is there any solution, or do I have to use a newer version of OpenCV or another machine learning library?
Thanks for your help.
The standard SVM is not an online algorithm. This means it doesn't support incremental learning, which is what you are trying to do. So if you want to add new points, you must retrain the model on the full data set.
There are some variants of SVM that support online learning (e.g. Pegasos SVM), but I don't think OpenCV implements them.
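For reference, here is a minimal stdlib-only sketch of the Pegasos idea (Shalev-Shwartz et al.): stochastic sub-gradient steps on the hinge-loss SVM objective, taken one example at a time, which is what makes the algorithm amenable to online settings. The function names and toy data below are invented for illustration; this is not OpenCV's API.

```python
import random

def pegasos_train(data, lam=0.01, epochs=200, seed=0):
    """Pegasos: stochastic sub-gradient descent on the hinge-loss objective.
    `data` is a list of (feature_list, label) pairs with label in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):  # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Shrink w every step; add the example only on a margin violation.
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Toy linearly separable data: the label is the sign of the first coordinate.
train = [([1.0, 0.2], 1), ([2.0, -0.5], 1), ([-1.5, 0.3], -1), ([-0.8, -0.9], -1)]
w = pegasos_train(train)
print([predict(w, x) for x, _ in train])
```

Because each update touches a single example, new samples arriving later can simply be fed through additional update steps rather than triggering a full retrain.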

How do I update a trained model (weka.classifiers.functions.MultilayerPerceptron) with new training data in Weka?

I would like to load a model I trained before and then update this model with new training data. But I found this task hard to accomplish.
I have learnt from Weka Wiki that
Classifiers implementing the weka.classifiers.UpdateableClassifier interface can be trained incrementally.
However, the regression model I trained uses the weka.classifiers.functions.MultilayerPerceptron classifier, which does not implement UpdateableClassifier.
Then I checked the Weka API and it turns out that no regression classifier implements UpdateableClassifier.
How can I train a regression model in Weka, and then update the model later with new training data after loading the model?
I have some data mining experience with Weka as well as scikit-learn and R, and as far as I know updatable regression models do not exist in Weka or scikit-learn. Some R libraries do support updating regression models (take a look at this linear regression example: http://stat.ethz.ch/R-manual/R-devel/library/stats/html/update.html), so if you are free to switch data mining tools this might help you out.
If you need to stick to Weka, then I'm afraid you would probably need to implement such a model yourself; but since I'm not a complete Weka expert, please check with the people on the Weka mailing list (http://weka.wikispaces.com/Weka+Mailing+List).
The SGD classifier implementation in Weka supports multiple loss functions, two of which are meant for linear regression: the epsilon-insensitive and Huber loss functions.
Therefore you can train a linear regression model incrementally with SGD, as long as one of these two loss functions is used to minimize the training error.
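The reason SGD sidesteps the problem is that its updates are per-example, so "updating a trained model" is just taking more gradient steps on the new data. Here is a stdlib-only toy version of that idea (squared loss for simplicity; the class and method names are invented, and Weka's actual implementation is the SGD classifier with the loss functions mentioned above):

```python
class SGDLinearRegressor:
    """Toy incremental linear regression via SGD on squared error.
    Illustrative sketch only, not Weka's API."""

    def __init__(self, dim, lr=0.01):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def partial_fit(self, x, y):
        # One gradient step; calling this later with new data
        # "updates" the model without retraining from scratch.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Train on an initial batch, then update with a point that arrives later.
model = SGDLinearRegressor(dim=1, lr=0.01)
old_batch = [([float(x)], 2.0 * x + 1.0) for x in range(8)]  # y = 2x + 1
for _ in range(500):
    for x, y in old_batch:
        model.partial_fit(x, y)
new_point = ([12.0], 25.0)       # arrives after the model was "saved"
model.partial_fit(*new_point)    # incremental update, no full retrain
print(round(model.predict([3.0]), 1))
```

An updateable classifier in Weka works the same way at the API level: load the serialized model, then call updateClassifier on each new instance.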

how to use libsvm model file in opencv

I am developing an OCR system using SVM in OpenCV C++. The SVM used is a (one-vs-one multi-class) linear SVM. OpenCV's multi-class SVM doesn't give probability estimates for each class used at training time, so I tried my luck with libsvm's multi-class classification (and probability output) via error-correcting codes. It gave me probability estimates for each class, but when I try to use the resulting training model file in OpenCV C++ I get an error. How can I use the libsvm training model in OpenCV? And if that is not possible, how can I get probability estimates for each class using a (one-vs-one multi-class) linear SVM?