I use Deeplearning4j to classify equipment names. I marked ~ 50,000 items with 495 classes, and I use this data to train the neural network.
That is, as input I provide a set of 50,000 vectors consisting of 0s and 1s, and the expected class for each vector (0 to 494).
I use the IrisClassifier example as a basis for the code.
I saved the trained model to a file, and now I can use it to predict the class of equipment.
As a test, I used the same data (50,000 items) that I used for training for prediction, and compared the predictions with my markup of this data.
The result turned out to be very good; the error of the neural network is ~1%.
After that, I tried using only the first 100 vectors of these 50,000 records for prediction, and removed the remaining 49,900.
For these 100 vectors, the predictions differ from the predictions for the same 100 vectors when they are part of the full 50,000.
That is, the less data we provide to the trained model, the greater the prediction error.
Even for exactly the same vectors.
Why does this happen?
My code.
Training:
//First: get the dataset using the record reader. CSVRecordReader handles loading/parsing
int numLinesToSkip = 0;
char delimiter = ',';
RecordReader recordReader = new CSVRecordReader(numLinesToSkip,delimiter);
recordReader.initialize(new FileSplit(new File(args[0])));
//Second: the RecordReaderDataSetIterator handles conversion to DataSet objects, ready for use in neural network
int labelIndex = 3331;
int numClasses = 495;
int batchSize = 4000;
// DataSetIterator iterator = new RecordReaderDataSetIterator(recordReader,batchSize,labelIndex,numClasses);
DataSetIterator iterator = new RecordReaderDataSetIterator.Builder(recordReader, batchSize).classification(labelIndex, numClasses).build();
List<DataSet> trainingData = new ArrayList<>();
List<DataSet> testData = new ArrayList<>();
while (iterator.hasNext()) {
DataSet allData = iterator.next();
allData.shuffle();
SplitTestAndTrain testAndTrain = allData.splitTestAndTrain(0.8); //Use 80% of data for training
trainingData.add(testAndTrain.getTrain());
testData.add(testAndTrain.getTest());
}
DataSet allTrainingData = DataSet.merge(trainingData);
DataSet allTestData = DataSet.merge(testData);
//We need to normalize our data. We'll use NormalizerStandardize (which gives us mean 0, unit variance):
DataNormalization normalizer = new NormalizerStandardize();
normalizer.fit(allTrainingData); //Collect the statistics (mean/stdev) from the training data. This does not modify the input data
normalizer.transform(allTrainingData); //Apply normalization to the training data
normalizer.transform(allTestData); //Apply normalization to the test data. This is using statistics calculated from the *training* set
long seed = 6;
int firstHiddenLayerSize = labelIndex/6;
int secondHiddenLayerSize = firstHiddenLayerSize/4;
//log.info("Build model....");
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(seed)
.activation(Activation.TANH)
.weightInit(WeightInit.XAVIER)
.updater(new Sgd(0.1))
.l2(1e-4)
.list()
.layer(new DenseLayer.Builder().nIn(labelIndex).nOut(firstHiddenLayerSize)
.build())
.layer(new DenseLayer.Builder().nIn(firstHiddenLayerSize).nOut(secondHiddenLayerSize)
.build())
.layer( new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
.activation(Activation.SOFTMAX) //Override the global TANH activation with softmax for this layer
.nIn(secondHiddenLayerSize).nOut(numClasses).build())
.build();
//run the model
MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
//record score once every 100 iterations
model.setListeners(new ScoreIterationListener(100));
for(int i=0; i<5000; i++ ) {
model.fit(allTrainingData);
}
//evaluate the model on the test set
Evaluation eval = new Evaluation(numClasses);
INDArray output = model.output(allTestData.getFeatures());
eval.eval(allTestData.getLabels(), output);
log.info(eval.stats());
// Save the Model
File locationToSave = new File(args[1]);
model.save(locationToSave, false);
Prediction:
// Open the network file
File locationToLoad = new File(args[0]);
MultiLayerNetwork model = MultiLayerNetwork.load(locationToLoad, false);
model.init();
// First: get the dataset using the record reader. CSVRecordReader handles loading/parsing
int numLinesToSkip = 0;
char delimiter = ',';
// Data to predict
CSVRecordReader recordReader = new CSVRecordReader(numLinesToSkip, delimiter); //skip no lines at the top - i.e. no header
recordReader.initialize(new FileSplit(new File(args[1])));
//Second: the RecordReaderDataSetIterator handles conversion to DataSet objects, ready for use in neural network
int batchSize = 4000;
DataSetIterator iterator = new RecordReaderDataSetIterator.Builder(recordReader, batchSize).build();
List<DataSet> dataSetList = new ArrayList<>();
while (iterator.hasNext()) {
DataSet allData = iterator.next();
dataSetList.add(allData);
}
DataSet dataSet = DataSet.merge(dataSetList);
DataNormalization normalizer = new NormalizerStandardize();
normalizer.fit(dataSet);
normalizer.transform(dataSet);
// Now use it to classify some data
INDArray output = model.output(dataSet.getFeatures());
// Save result
BufferedWriter writer = new BufferedWriter(new FileWriter(args[2], true));
for (int i=0; i<output.rows(); i++) {
writer
.append(output.getRow(i).argMax().toString())
.append(" ")
.append(String.valueOf(i))
.append(" ")
.append(output.getRow(i).toString())
.append('\n');
}
writer.close();
Ensure you save the normalizer as follows alongside the model:
import org.nd4j.linalg.dataset.api.preprocessor.serializer.NormalizerSerializer;
NormalizerSerializer SUT = NormalizerSerializer.getDefault();
SUT.write(normalizer,new File("outputFile.bin"));
NormalizerStandardize restored = SUT.restore(new File("outputFile.bin"));
You need to use the same normalizer data for both training and prediction. Otherwise it will use wrong statistics when transforming your data.
The way you are currently doing it results in data that looks very different from the training data, which is why you get such a different result.
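To make this concrete, here is a minimal sketch of how the training and prediction sides fit together, assuming the normalizer is written to a hypothetical file normalizer.bin next to the model:
// Training side: persist the fitted normalizer alongside the model (normalizer.bin is just an example name)
NormalizerSerializer serializer = NormalizerSerializer.getDefault();
serializer.write(normalizer, new File("normalizer.bin"));
// Prediction side: restore the training-time normalizer and apply it to the new data,
// instead of fitting a new NormalizerStandardize on the prediction data
NormalizerStandardize restored = serializer.restore(new File("normalizer.bin"));
restored.transform(dataSet);
INDArray output = model.output(dataSet.getFeatures());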
Related
I am using the following code to create my machine learning model. The accuracy of the model is 0.76. I am curious to know which records from my test data the model got wrong. Is there a way I can see that data?
// 1. Load the dataset for training and testing
var trainData = ctx.Data.LoadFromTextFile<SentimentData>(trainDataPath, hasHeader: true);
var testData = ctx.Data.LoadFromTextFile<SentimentData>(testDataPath, hasHeader: true);
// 2. Build a transformer/estimator to transform input data so that the Machine Learning algorithm can understand it
IEstimator<ITransformer> estimator = ctx.Transforms.Text.FeaturizeText("Features", nameof(SentimentData.Text));
// 3. - set the training algorithm and create the pipeline for model builder
var trainer = ctx.BinaryClassification.Trainers.SdcaLogisticRegression();
var trainingPipeline = estimator.Append(trainer);
// 4. - Train the model
var trainedModel = trainingPipeline.Fit(trainData);
// 5. - Perform the predictions on the test data
var predictions = trainedModel.Transform(testData);
// 6. - Evaluate the model
var metrics = ctx.BinaryClassification.Evaluate(data: predictions);
By using the GetColumn and CreateEnumerable methods, you can find the data that the model didn't predict correctly.
After you get the metrics, use the GetColumn method on the predictions from the test data set to get the original label values. Then, use the CreateEnumerable method to get the predictions that hold the predicted values. Optionally, you can get the sentiment text as well.
var originalLabels = predictions.GetColumn<bool>("Label").ToArray();
var sentimentText = predictions.GetColumn<string>(nameof(SentimentData.SentimentText)).ToArray();
var predictedLabels = ctx.Data.CreateEnumerable<SentimentPrediction>(predictions, reuseRowObject: false).ToArray();
After getting the data, just loop through one of them (I did a count of the original labels) and you can access the data at each iteration. From there you can check if the actual label doesn't equal the predicted value to only print out the values that the model didn't get correctly.
for (int i = 0; i < originalLabels.Count(); i++)
{
string outputText = String.Empty;
if (originalLabels[i] != predictedLabels[i].Prediction)
{
outputText = $"Text - {sentimentText[i]} | ";
outputText += $"Original - {originalLabels[i]} | ";
outputText += $"Predicted - {predictedLabels[i].Prediction}";
Console.WriteLine(outputText);
}
}
With that you have the data that you need. :)
Hope that helps!
From your comment, I believe the method you are looking for is in the Keras library: the predict_classes method of a Sequential model, as described on their documentation page.
This will provide you with an array of predicted outputs, which you can then compare to the ground truths. Visit the documentation to see the parameters.
Hope this helps!
I'm trying to implement something like this: https://www.youtube.com/watch?v=Fp9kzoAxsA4, which is a GANN (Genetic Algorithm Neural Network), using the DL4J library.
Genetic learning variables:
Genes: Creature Neural Network weights
Fitness: Total distance moved.
Neural network layers for every creature:
input layer: 5 sensors, each 1 if there's a wall in the sensor's direction and 0 if not.
output layer: Linear output that maps to the angle of the creature.
This is my createBrain method for the creature object:
private void createBrain() {
Layer inputLayer = new DenseLayer.Builder()
// 5 eye sensors
.nIn(5)
.nOut(5)
// How do I initialize custom weights using creature genes (this.genes)?
// .weightInit(WeightInit.ZERO)
.activation(Activation.RELU)
.build();
Layer outputLayer = new OutputLayer.Builder()
.nIn(5)
.nOut(1)
.activation(Activation.IDENTITY)
.lossFunction(LossFunctions.LossFunction.MSE)
.build();
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(6)
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
.iterations(1)
.learningRate(0.006)
.updater(Updater.NESTEROVS).momentum(0.9)
.list()
.layer(0,inputLayer)
.layer(1, outputLayer)
.pretrain(false).backprop(true)
.build();
this.brain = new MultiLayerNetwork(conf);
this.brain.init();
}
If it might help I have pushed to this repo
https://github.com/kareem3d/GeneticNeuralNetwork
And this is the Creature class
https://github.com/kareem3d/GeneticNeuralNetwork/blob/master/src/main/java/com/mycompany/gaan/Creature.java
I'm a machine learning student so if you see any obvious mistakes please let me know, thanks :)
I don't know if you can set weights in the layer configuration (I couldn't see it in the API docs), but you can get and set the network parameters after initializing the model.
To set them individually per layer, you can follow this example:
Iterator<Map.Entry<String, INDArray>> paramap_iterator = convolutionalEncoder.paramTable().entrySet().iterator();
while (paramap_iterator.hasNext()) {
    Map.Entry<String, INDArray> me = paramap_iterator.next();
    System.out.println(me.getKey()); // print key
    System.out.println(Arrays.toString(me.getValue().shape())); // print shape of INDArray
    convolutionalEncoder.setParam(me.getKey(), Nd4j.rand(me.getValue().shape())); // set some random values
}
If you want to set all parameters of the network at once, you can use setParams() and params(), for example:
INDArray all_params = convolutionalEncoder.params();
convolutionalEncoder.setParams(Nd4j.rand(all_params.shape()));//set random values with the same shape
You can check the API for more information:
https://deeplearning4j.org/doc/org/deeplearning4j/nn/api/Model.html#params--
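Applied to your creature, a minimal sketch could look like this (assuming this.genes is a double[] whose length equals this.brain.numParams(); the genes field is from your question, the rest is hypothetical):
// Hypothetical: overwrite all network weights with the creature's genes after init()
long expected = this.brain.numParams();
if (this.genes.length != expected) {
    throw new IllegalStateException("Gene count " + this.genes.length + " does not match parameter count " + expected);
}
this.brain.setParams(Nd4j.create(this.genes)); // the genes become the flattened weight/bias vector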
It worked for me:
int inputNum = 4;
int outputNum = 3;
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(123)
.layer(new EmbeddingLayer.Builder()
.nIn(inputNum) // Number of input datapoints.
.nOut(8) // Number of output datapoints.
.activation(Activation.RELU) // Activation function.
.weightInit(WeightInit.XAVIER) // Weight initialization.
.build())
.list()
.layer(new DenseLayer.Builder()
.nIn(inputNum) // Number of input datapoints.
.nOut(8) // Number of output datapoints.
.activation(Activation.RELU) // Activation function.
.weightInit(WeightInit.XAVIER) // Weight initialization.
.build())
.layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
.nIn(8)
.nOut(outputNum)
.activation(Activation.SOFTMAX)
.weightInit(WeightInit.XAVIER)
.build())
.pretrain(false).backprop(false)
.build();
MultiLayerNetwork multiLayerNetwork = new MultiLayerNetwork(conf);
multiLayerNetwork.init();
Map<String, INDArray> paramTable = multiLayerNetwork.paramTable();
Set<String> keys = paramTable.keySet();
Iterator<String> it = keys.iterator();
while (it.hasNext()) {
String key = it.next();
INDArray values = paramTable.get(key);
System.out.print(key+" ");//print keys
System.out.println(Arrays.toString(values.shape()));//print shape of INDArray
System.out.println(values);
multiLayerNetwork.setParam(key, Nd4j.rand(values.shape()));//set some random values
}
I am a beginner with Weka.
I have managed to import a dataset from disk (one folder per category, all texts related to that category inside the folder), apply StringToWordVector with a tokenizer, and train a Naive Bayes Multinomial categorizer... The code is below (it is C#, but Java is OK of course).
However, I can hardly find information on how to use the categorizer in a project. Say I have a text with an unknown category, input by a user; how can I apply the categorizer to this text and infer the category it belongs to? (See "// what to do here" below.)
Any help would be greatly appreciated ;-)
Thanks in advance
Julien
string filepath = @"C:\Users\Julien\Desktop\Meal\";
ClassificationDatasetHelper classHelper = new ClassificationDatasetHelper();
weka.core.converters.TextDirectoryLoader tdl = new weka.core.converters.TextDirectoryLoader();
tdl.setDirectory(new java.io.File(filepath));
tdl.setCharSet("UTF-8");
weka.core.Instances insts = tdl.getDataSet();
weka.filters.unsupervised.attribute.StringToWordVector swv = new weka.filters.unsupervised.attribute.StringToWordVector();
swv.setInputFormat(insts);
swv.setDoNotOperateOnPerClassBasis(false);
swv.setOutputWordCounts(true);
swv.setWordsToKeep(1000);
swv.setIDFTransform(true);
swv.setMinTermFreq(1);
swv.setDoNotOperateOnPerClassBasis(false);
swv.setPeriodicPruning(-1);
weka.core.tokenizers.NGramTokenizer tokenizer = new weka.core.tokenizers.NGramTokenizer();
tokenizer.setNGramMinSize(2);
tokenizer.setNGramMaxSize(2);
swv.setTokenizer(tokenizer);
insts = weka.filters.Filter.useFilter(insts, swv);
insts.setClassIndex(0);
weka.classifiers.Classifier cl = new weka.classifiers.bayes.NaiveBayesMultinomial();
int trainSize = insts.numInstances() * percentSplit / 100;
int testSize = insts.numInstances() - trainSize;
weka.core.Instances train = new weka.core.Instances(insts, 0, trainSize);
cl.buildClassifier(train);
string s = "Try to classify this text";
weka.core.Instance instanceToClassify = new weka.core.Instance();
// what to do here
// ???
double predictedClass = cl.classifyInstance(instanceToClassify);
Thanks
The best place to learn how to use Weka in your Java app is in the official Weka wiki.
https://waikato.github.io/weka-wiki/use_weka_in_your_java_code/
Basically, you provide a new dataset (the classifier will ignore the category attribute) and you ask it to label each instance for you, like this
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import weka.core.Instances;
...
// load unlabeled data
Instances unlabeled = new Instances(
new BufferedReader(
new FileReader("/some/where/unlabeled.arff")));
// set class attribute
unlabeled.setClassIndex(unlabeled.numAttributes() - 1);
// create copy
Instances labeled = new Instances(unlabeled);
// label instances
for (int i = 0; i < unlabeled.numInstances(); i++) {
double clsLabel = tree.classifyInstance(unlabeled.instance(i));
labeled.instance(i).setClassValue(clsLabel);
}
// save labeled data
BufferedWriter writer = new BufferedWriter(
new FileWriter("/some/where/labeled.arff"));
writer.write(labeled.toString());
writer.newLine();
writer.flush();
writer.close();
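If you want the category name rather than the numeric class index, a small follow-up sketch (reusing the labeled and tree variables from the snippet above) is:
// Map the numeric prediction of the first instance back to its category name
double clsLabel = tree.classifyInstance(labeled.instance(0));
String category = labeled.classAttribute().value((int) clsLabel);
System.out.println("Predicted category: " + category);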
I am trying to extract topics from 7 million Twitter messages. I treat each tweet as a document, so I stored all tweets in a file where each line (or tweet) is treated as a document. I used this file as the input file for the Mallet API.
public static void LDAModel(int numofK,int numbofIteration,int numberofThread,String outputDir,InstanceList instances) throws Exception
{
// Create a model with 100 topics, alpha_t = 0.01, beta_w = 0.01
// Note that the first parameter is passed as the sum over topics, while
// the second is the parameter for a single dimension of the Dirichlet prior.
int numTopics = numofK;
ParallelTopicModel model = new ParallelTopicModel(numTopics, 1.0, 0.01);
model.addInstances(instances);
// Use two parallel samplers, which each look at one half the corpus and combine
// statistics after every iteration.
model.setNumThreads(numberofThread);
// Run the model for 50 iterations and stop (this is for testing only,
// for real applications, use 1000 to 2000 iterations)
model.setNumIterations(numbofIteration);
model.estimate();
// Show the words and topics in the first instance
// The data alphabet maps word IDs to strings
Alphabet dataAlphabet = instances.getDataAlphabet();
FeatureSequence tokens = (FeatureSequence) model.getData().get(0).instance.getData();
LabelSequence topics = model.getData().get(0).topicSequence;
Formatter out = new Formatter(new StringBuilder(), Locale.US);
for (int position = 0; position < tokens.getLength(); position++) {
// out.format("%s-%d ", dataAlphabet.lookupObject(tokens.getIndexAtPosition(position)), topics.getIndexAtPosition(position));
out.format("%s-%d ", dataAlphabet.lookupObject(tokens.getIndexAtPosition(position)), topics.getIndexAtPosition(position));
}
System.out.println(out);
// Estimate the topic distribution of the first instance,
// given the current Gibbs state.
double[] topicDistribution = model.getTopicProbabilities(0);
// Get an array of sorted sets of word ID/count pairs
ArrayList<TreeSet<IDSorter>> topicSortedWords = model.getSortedWords();
// Show top 10 words in topics with proportions for the first document
String topicsoutput="";
for (int topic = 0; topic < numTopics; topic++) {
Iterator<IDSorter> iterator = topicSortedWords.get(topic).iterator();
out = new Formatter(new StringBuilder(), Locale.US);
out.format("%d\t%.3f\t", topic, topicDistribution[topic]);
int rank = 0;
while (iterator.hasNext() && rank < 10) {
IDSorter idCountPair = iterator.next();
out.format("%s (%.0f) ", dataAlphabet.lookupObject(idCountPair.getID()), idCountPair.getWeight());
//out.format("%s ", dataAlphabet.lookupObject(idCountPair.getID()));
rank++;
}
System.out.println(out);
}
// Create a new instance with high probability of topic 0
StringBuilder topicZeroText = new StringBuilder();
Iterator<IDSorter> iterator = topicSortedWords.get(0).iterator();
int rank = 0;
while (iterator.hasNext() && rank < 10) {
IDSorter idCountPair = iterator.next();
topicZeroText.append(dataAlphabet.lookupObject(idCountPair.getID()) + " ");
rank++;
}
// Create a new instance named "test instance" with empty target and source fields.
InstanceList testing = new InstanceList(instances.getPipe());
testing.addThruPipe(new Instance(topicZeroText.toString(), null, "test instance", null));
TopicInferencer inferencer = model.getInferencer();
double[] testProbabilities = inferencer.getSampledDistribution(testing.get(0), 10, 1, 5);
System.out.println("0\t" + testProbabilities[0]);
File pathDir = new File(outputDir + File.separator+ "NumofTopics"+numTopics); //FIXME replace all strings with constants
pathDir.mkdir();
String DirPath = pathDir.getPath();
String stateFile = DirPath+File.separator+"output_state.gz";
String outputDocTopicsFile = DirPath+File.separator+"output_doc_topics.txt";
String topicKeysFile = DirPath+File.separator+"output_topic_keys";
PrintWriter writer=null;
String topicKeysFile_fromProgram = DirPath+File.separator+"output_topic";
try {
writer = new PrintWriter(topicKeysFile_fromProgram, "UTF-8");
writer.print(topicsoutput);
writer.close();
} catch (Exception e) {
e.printStackTrace();
}
model.printTopWords(new File(topicKeysFile), 11, false);
model.printDocumentTopics(new File (outputDocTopicsFile));
model.printState(new File (stateFile));
}
public static void main(String[] args) throws Exception{
// Begin by importing documents from text to feature sequences
ArrayList<Pipe> pipeList = new ArrayList<Pipe>();
// Pipes: lowercase, tokenize, remove stopwords, map to features
pipeList.add( new CharSequenceLowercase() );
pipeList.add( new CharSequence2TokenSequence(Pattern.compile("\\p{L}[\\p{L}\\p{P}]+\\p{L}")) );
pipeList.add( new TokenSequenceRemoveStopwords(new File("H:\\Data\\stoplists\\en.txt"), "UTF-8", false, false, false) );
pipeList.add( new TokenSequence2FeatureSequence() );
InstanceList instances = new InstanceList (new SerialPipes(pipeList));
Reader fileReader = new InputStreamReader(new FileInputStream(new File("E:\\Thesis Data\\DataForLDA\\freshnewData\\cleanTweets.txt")), "UTF-8");
instances.addThruPipe(new CsvIterator (fileReader, Pattern.compile("^(\\S*)[\\s,]*(\\S*)[\\s,]*(.*)$"),
3, 2, 1)); // data, label, name fields
int numberofTopic=5;
int numberofIteration=50;
int numberofThread=6;
String outputDir="J:\\Topics\\";
//int numberofTopic=5;
LDAModel(numberofTopic,numberofIteration,numberofThread,outputDir,instances);
TimeUnit.SECONDS.sleep(30);
numberofTopic=10; }
I have got three files from the above program.
1. state file
2. topic proportion file
3. key topic list
I would like to find out the number of documents allocated per topic.
For example I got the following output from key topic list file
0.004 obama (5471) canada (5283) woman (5152) vote (4879) police(3965)
where the first column is the topic serial number, the second column is the topic weight, and the third column lists the words under this topic (with the number of occurrences).
Here I get the number of word occurrences under each topic, but I would also like to show the number of documents where this topic occurs. It would be helpful to show this output in a separate file, for example:
Topic 1: doc1(80%) doc2(70%) .......
Could anyone please give some idea or any source code for this?
Thanks.
The information you are looking for is contained in the file "2. topic proportion" you mentioned. Note that every document contains each topic with some percentage (although the percentages may be large for one topic and extremely small for others). You will have to decide what you want to extract from the file: the dominant topic (it is in column 3); the dominant topic, but only when its percentage is at least 50% (sometimes, two topics have almost the same percentage); ...
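As a rough sketch (assuming the output_doc_topics.txt layout written by printDocumentTopics above: a '#' header line, then one line per document with the document index, document name, and topic/proportion pairs sorted by proportion, so the dominant topic sits in column 3), you could count documents per dominant topic like this:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Map;
import java.util.TreeMap;
...
// Count how many documents have each topic as their dominant topic (format assumption above)
Map<Integer, Integer> docsPerTopic = new TreeMap<>();
BufferedReader br = new BufferedReader(new FileReader(outputDocTopicsFile));
String line;
while ((line = br.readLine()) != null) {
    if (line.startsWith("#") || line.trim().isEmpty()) continue; // skip header and blank lines
    String[] cols = line.trim().split("\\s+");
    int dominantTopic = Integer.parseInt(cols[2]); // third column: topic with the highest proportion
    docsPerTopic.merge(dominantTopic, 1, Integer::sum);
}
br.close();
for (Map.Entry<Integer, Integer> e : docsPerTopic.entrySet()) {
    System.out.println("Topic " + e.getKey() + ": " + e.getValue() + " documents");
}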
I'm new to OpenCV and C++ and I'm trying to build a classifier using the Gaussian Mixture Model within OpenCV. I figured out how it works and got it working... maybe. I now have something like this:
If I classify the training samples right after the model is trained and saved, I get the result I want. But when I reclassify my training data after loading the model with read(), one of the clusters is missing, meaning I get a different cluster result from the same GMM model. I don't get it, because the cluster I want is gone; I can't reproduce the classification until I retrain the model using the same data. I checked the code at runtime, and the result value in the Vec2d returned from predict() was never assigned to 1 (I set 3 clusters).
Maybe there's a bug or I did something wrong?
P.S. I'm using OpenCV 2.4.8 in VS2013.
My program looks like this:
train part
void GaussianMixtureModel::buildGMM(InputArray _src){
//use source to train GMM and save the model
Mat samples, input = _src.getMat();
createSamples(input, samples);
bool status = em_model.train(samples);
saveModel();
}
save/load the model
FileStorage fs(filename, FileStorage::READ);
if (fs.isOpened()) // if we have file with parameters, read them
{
const FileNode& fn = fs["StatModel.EM"];
em_model.read(fn);
fs.release();
}
FileStorage fs_save(filename, FileStorage::WRITE);
if (fs_save.isOpened()) // if the file opened successfully, write the parameters
{
em_model.write(fs_save);
fs_save.release();
}
predict part
vector<Mat> GaussianMixtureModel::classify(Mat input){
/// samples is a matrix of channels x N elements; each row is a set of features
Mat samples;
createSamples(input, samples);
for (int k = 0; k < clusterN; k++){
masks[k] = Mat::zeros(input.size(), CV_8UC1);
}
int idx = 0;
for (int i = 0; i < input.rows; i++){
for (int j = 0; j < input.cols; j++){
//process the predicted probability
Mat probs(1, clusterN, CV_64FC1);
Vec2d response = em_model.predict(samples.row(idx++), probs);
int result = cvRound(response[1]);
for (int k = 0; k < clusterN; k++){
if (result == k){
// change to the k-th class's picture
masks[k].at<uchar>(i, j) = 255;
}
...
// something else
}
}
}
}
I suppose my answer is too late, but since I have encountered the same problem, the solution I found may be useful for others.
By analysing the source code, I noticed that in the case of EM::COV_MAT_DIAGONAL the eigenvalues of the covariance matrices (covsEigenValues in the source code) are obtained via SVD after loading the saved data.
However, SVD computes the singular values (eigenvalues in our case) and stores them in ASCENDING order.
To prevent this, I simply extract the diagonal elements of the loaded covariance matrices directly into covsEigenValues to keep the correct order.