What is the best GAN model for training on CIFAR-10 at the current time? - machine-learning

I want to know the best GAN model for training on CIFAR-10.
I have looked at many models such as DCGAN, WGAN, CGAN, SSGAN, and SNGAN, but it seems I want a better one.
Could you tell me which is best, based on your experience or on FID/IS scores?
Thank you.

Here is the full leaderboard of GANs for CIFAR-10 (link). It is ranked by Inception Score.
The current best method (state of the art) is NCSN (paper: link, code: link).

Related

VotingRegressor vs. StackingRegressor

Here's one: in what situation would you use one vs. the other? Let me run a hypothetical.
Let's say I'm training a few different regressors, and I get the final score from each regressor's training run. If I wanted to use the VotingRegressor to ensemble those models, I could use those scores as weight parameters to get a weighted average of each model's prediction, right?
So what's the benefit of doing that vs. using the StackingRegressor to get the final prediction? As I understand it, a final model makes its prediction based on each individual model's predictions, so in effect, wouldn't that final StackingRegressor model learn that some predictions are better than others? Almost like it's doing a sort of weighted voting of its own?
Short of running both examples and seeing the differences in predictions, I'm wondering if anyone else has experience with both and could provide some insight as to which might be the better way to go. I don't see a question like this on SO yet. Thanks!
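To make the distinction concrete, here is a minimal sketch of the two scikit-learn interfaces (the dataset, base estimators, and weights are made up for illustration): VotingRegressor uses fixed weights you supply, while StackingRegressor fits a final estimator that learns how to combine the base predictions.

from sklearn.datasets import make_regression
from sklearn.ensemble import (RandomForestRegressor, StackingRegressor,
                              VotingRegressor)
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

base_estimators = [
    ("lr", LinearRegression()),
    ("ridge", Ridge(alpha=1.0)),
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
]

# Voting: the weights are fixed up front (e.g. derived from each model's CV score).
voter = VotingRegressor(estimators=base_estimators, weights=[1.0, 1.0, 2.0])

# Stacking: a final estimator learns how to combine the base predictions,
# using out-of-fold predictions of the base models as its training features.
stacker = StackingRegressor(estimators=base_estimators, final_estimator=Ridge())

for name, model in [("voting", voter), ("stacking", stacker)]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")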

How to balance your dataset when in the real world there isn't a constant balance

I can't decide how to balance my dataset of "distress situations", since it isn't something that can be measured like "the percentage of rotten apples in a factory".
For now, I've chosen to use a 50%-50% split of distress voice snippets and random non-distress snippets.
I'd be glad for some advice from the community: what are the best practices in this situation? I've chosen the 50-50 approach to avoid statistical biases, and I'm using a Sequential (Keras) model.
Try modifying the loss function instead of the dataset if you cannot modify the dataset, for example by weighting the minority class more heavily. That said, I think the question is not completely formulated.
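A rough sketch of that idea, assuming a binary "distress vs. non-distress" Keras Sequential model (the feature size, weight factor, and data below are placeholders):

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(40,)),  # 40 = assumed feature size
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Suppose distress (class 1) is rare; up-weight its loss contribution so the
# model is penalised more for missing it, without duplicating or dropping data.
class_weight = {0: 1.0, 1: 10.0}  # the 10x factor is a placeholder, tune it

# x_train, y_train stand in for your real snippets and labels.
x_train = np.random.rand(1000, 40)
y_train = (np.random.rand(1000) < 0.1).astype("float32")

model.fit(x_train, y_train, epochs=5, batch_size=32, class_weight=class_weight)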

Is there a way to not re-train my NN every time?

If I only have to make a prediction (or a few), do I need to re-train my NN every time? Or can I, pardon me if this is silly, "save" the training and only do the test?
Currently I'm using PyCharm, but I've seen that with other IDEs, like Spyder, you can execute selected lines of code; in that case, how does the NN keep its training without the need to re-train?
Sorry if these questions are too naive.
No, you don't need to re-train your NN every time. Just save your model parameters to a file and load them to make new predictions.
Are you using a machine learning framework like TensorFlow or Keras? In Keras this is very easy to implement; there are two methods: first, you can save the model during training using callbacks, and second, you can use your_model_name.save('file_name.h5') and then load it with load_model('file_name.h5') to make predictions with your_model_name.predict(x).
By the way, there is a nice guide on how to properly save the full model architecture or just the model weights.
EDIT: For both methods you can use load_model; it is very simple!
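A minimal sketch of both approaches in Keras (the toy model and data are only for illustration; substitute your own):

import numpy as np
from tensorflow import keras

x_train = np.random.rand(200, 8)
y_train = np.random.randint(0, 2, size=(200,))

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Option 1: save checkpoints during training via a callback.
checkpoint = keras.callbacks.ModelCheckpoint("checkpoint.h5")
model.fit(x_train, y_train, epochs=2, callbacks=[checkpoint], verbose=0)

# Option 2: save the whole trained model once training is finished.
model.save("file_name.h5")

# Later, in a new script or session: load and predict without re-training.
restored = keras.models.load_model("file_name.h5")
predictions = restored.predict(x_train[:5])
print(predictions)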

How can I re-train my logistic model using pymc3?

I have a binary classification problem with around 15 features, which I chose using another model. Now I want to perform Bayesian logistic regression on these features. My target classes are highly imbalanced (the minority class is 0.001%), and I have around 6 million records. I want to build a model that can be re-trained nightly or over the weekend using Bayesian logistic regression.
Currently, I have divided the data into 15 parts. I train my model on the first part and test on the last part, then I update my priors using the Interpolated method of pymc3 and re-run the model on the 2nd set of data. I check the accuracy and other metrics (ROC, F1-score) after each run.
Problems:
My score is not improving.
Am I using the right approach?
This process is taking too much time.
If someone can guide me toward the right approach, with code snippets, it would be very helpful.
You can use variational inference. It is faster than sampling and produces broadly similar results. pymc3 itself provides methods for VI; you can explore those.
I can only answer this part of the question. If you elaborate on your problem a bit further, maybe I can help more.
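A hedged sketch of Bayesian logistic regression fitted with ADVI in pymc3 (the synthetic data, prior choices, and iteration counts are assumptions to adapt to your 15 features):

import numpy as np
import pymc3 as pm

n, k = 10_000, 15
X = np.random.randn(n, k)
true_w = np.random.randn(k)
y = (1.0 / (1.0 + np.exp(-(X @ true_w))) > np.random.rand(n)).astype(int)

with pm.Model() as model:
    # Priors over the logistic regression coefficients and intercept.
    w = pm.Normal("w", mu=0.0, sigma=1.0, shape=k)
    b = pm.Normal("b", mu=0.0, sigma=1.0)
    p = pm.math.sigmoid(pm.math.dot(X, w) + b)
    pm.Bernoulli("obs", p=p, observed=y)

    # ADVI (variational inference) is usually much faster than NUTS sampling
    # on large datasets, at the cost of an approximate posterior.
    approx = pm.fit(n=20_000, method="advi")
    trace = approx.sample(1_000)

print(trace["w"].mean(axis=0))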

Why does network model averaging improve performance on the test set?

People train a few network models and then average them to improve the performance of the final network. I'd like to know why model averaging works. Is there any paper or explanation of this?
Actually, Dropout is also a form of model averaging, so why does dropout work?
People take the model average so that if any individual model overfits the data, the combined average still provides a much more general prediction. Since the models' errors are not perfectly correlated, they partly cancel when averaged, which reduces the variance of the final prediction.
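A tiny illustrative sketch of prediction averaging (the "predictions" below are random stand-ins for the outputs of several separately trained networks):

import numpy as np

rng = np.random.default_rng(0)

# Pretend these are class-probability predictions from 5 separately trained
# networks on the same 100 test samples (3 classes each).
member_predictions = [rng.dirichlet(np.ones(3), size=100) for _ in range(5)]

# Model average: mean of the predicted probabilities, then argmax per sample.
avg_probs = np.mean(member_predictions, axis=0)
ensemble_labels = avg_probs.argmax(axis=1)
print(ensemble_labels[:10])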
