I've got a network trained and I want to save it and be able to load it later so I don't have to re-train it... duh.
End of training code:
//Save network
SerializeObject.save(new File("encognet"),network);
Encog.getInstance().shutdown();
Loading code:
BasicNetwork network = (BasicNetwork) EncogDirectoryPersistence.loadObject(new File("encognet"));
I get this error
Exception in thread "main" org.encog.persist.PersistError: Not a valid EG file.
Can anyone tell me how to fix this?
I think the problem is that you are not saving the file with an .eg extension. If that is not it, I'm not sure about SerializeObject.save, but I know that EncogDirectoryPersistence works for me.
So, test out this code for saving
public static final String FILENAME = "test_load_net.eg";
EncogDirectoryPersistence.saveObject(new File(FILENAME), network);
And then load like this
public static final String FILENAME = "test_load_net.eg";
BasicNetwork network = (BasicNetwork)EncogDirectoryPersistence.loadObject(new File(FILENAME));
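Alternatively, if you want to keep SerializeObject.save (plain Java serialization), the load side has to use the same mechanism rather than EncogDirectoryPersistence. A minimal sketch, assuming your Encog version provides the matching SerializeObject.load:
// Save and load via the same mechanism (Java serialization)
SerializeObject.save(new File("encognet"), network);
BasicNetwork restored = (BasicNetwork) SerializeObject.load(new File("encognet"));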
My problem is probably best explained with code.
Consider the snippet below:
// First read
OntModel m1 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m1,uri0);
m1.loadImports();
// Second read (from the same URI)
OntModel m2 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m2,uri0);
m2.loadImports();
where uri0 points to a valid RDF file describing an ontology model with n imports.
and the following custom ReadHook (which has been set in advance):
@Override
public String beforeRead(Model model, String source, OntDocumentManager odm) {
    System.out.println("BEFORE READ CALLED: " + source);
    return source; // pass the source through unchanged
}
Global FileManager and OntDocumentManager are used with the following settings:
processImports = true;
caching = true;
If I run the snippet above, the model will be read from uri0 and beforeRead will be invoked exactly n times (once for each import).
However, in the second read, beforeRead won't be invoked even once.
What should I reset, and how, so that Jena invokes beforeRead during the second read as well?
What I have tried so far:
At first I thought it was due to caching being on, but turning it off or clearing it between the first and second read didn't do anything.
I have also tried removing all ignoredImport records from m1. Nothing changed.
I finally managed to solve this. The problem was in ModelFactory.createOntologyModel(). Ultimately, this call is translated to ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RDFS_INF, null).
All ontology models created with the static OntModelSpec.OWL_MEM_RDFS_INF share its ImportsModelMaker and some of its other objects, which results in shared state. Apparently, this state prevented the read hook from being invoked a second time for the same imports.
This can be prevented by creating a custom, independent and non-static OntModelSpec instance and using it when creating an OntModel, for example:
new OntModelSpec( ModelFactory.createMemModelMaker(), new OntDocumentManager(), RDFSRuleReasonerFactory.theInstance(), ProfileRegistry.OWL_LANG );
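A rough sketch of how that spec might be wired into the snippet from the question (the constructor arguments simply mirror what OWL_MEM_RDFS_INF uses, but as fresh, non-shared instances):
OntModelSpec spec = new OntModelSpec(
        ModelFactory.createMemModelMaker(),
        new OntDocumentManager(),
        RDFSRuleReasonerFactory.theInstance(),
        ProfileRegistry.OWL_LANG);

// Second read, now with its own document manager and model maker
OntModel m2 = ModelFactory.createOntologyModel(spec, null);
RDFDataMgr.read(m2, uri0);
m2.loadImports(); // beforeRead should now fire again for each import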
I am trying to figure out if what I have so far is correct. I feel like I am chasing my own tail here, but maybe I do understand it correctly.
I have this file, which includes a checksum that is oddly short. This is a set of two files: this one, and the one I copied to pastebin below. I noticed that the first file includes the checksum, but it seems that if I change any parameter or the serial number, the file will then fail to load.
Am I correct to assume that the software takes some of the XML values, adds them into a string, creates the checksum, and then compares it to the checksum in the file? I am not sure how they could have put the checksum inside the file itself otherwise, so I guess it only takes some of the values.
Second, am I correct to assume that they just truncated the checksum? I have never seen one that small.
In the end I am trying to figure out how to create my own calibration checksum, but at the moment I am trying to at least understand how it works. Far from breaking it :)
<?xml version="1.0" encoding="utf-8"?>
<InstrumentData xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://schemas.datacontract.org/2004/07/AcquisitionEngine.Types">
<CalibrationMode>parabolic</CalibrationMode>
<CalibrationTemperatures
xmlns:d2p1="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
<d2p1:double>28</d2p1:double>
<d2p1:double>52</d2p1:double>
<d2p1:double>76</d2p1:double>
<d2p1:double>90</d2p1:double>
</CalibrationTemperatures>
<Checksum>6E23D45E</Checksum>
<ClusterSize>1</ClusterSize>
<HardwareVariant>Hardware_Legacy</HardwareVariant>
<InstrumentID>undefined</InstrumentID>
<LineFrequency>LineFrequency_50Hz</LineFrequency>
<MaxCalibrationDeviation>0.152</MaxCalibrationDeviation>
<SerialNo>6328ZG200015</SerialNo>
<Version>3.2</Version>
</InstrumentData>
The second file is included here https://pastebin.com/YQ1qKZ2v
Update: I have been able to find the code that generates this but I am still not getting the same hash.
private string GenerateChecksum(string serialno)
{
    StringBuilder stringBuilder = new StringBuilder();
    stringBuilder.Append(this.HardwareVariant);
    stringBuilder.Append(serialno);
    return HashGenerator.GetHash32AsHex(stringBuilder.ToString());
}
From this line it seems to take the HardwareVariant and the serial number.
Then it seems to take this, generate a hash32, and then add an "X2" at the end of it? My background is PHP, but am I understanding this correctly?
internal static class HashGenerator
{
    private static HashAlgorithm CryptographicHasher;

    static HashGenerator()
    {
        HashGenerator.CryptographicHasher = MD5.Create();
    }

    public static int GetHash32(string value)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(value);
        return BitConverter.ToInt32(HashGenerator.CryptographicHasher.ComputeHash(bytes), 0);
    }

    public static string GetHash32AsHex(string value)
    {
        return HashGenerator.GetHash32(value).ToString("X2");
    }
}
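For reference, here is a self-contained sketch of what the decompiled code above appears to compute, plugging in the HardwareVariant and SerialNo from the XML sample. Note that "X2" is a hexadecimal format specifier (minimum two digits), not a literal suffix appended to the value; whether this actually reproduces the stored checksum is an assumption I have not verified:
using System;
using System.Security.Cryptography;
using System.Text;

class ChecksumSketch
{
    static void Main()
    {
        // HardwareVariant + SerialNo, concatenated as in GenerateChecksum above
        string input = "Hardware_Legacy" + "6328ZG200015";
        byte[] digest = MD5.Create().ComputeHash(Encoding.UTF8.GetBytes(input));
        // First four bytes of the MD5 digest, read as a little-endian Int32
        int hash32 = BitConverter.ToInt32(digest, 0);
        // Formatted as hexadecimal ("X2" = hex, at least two digits)
        Console.WriteLine(hash32.ToString("X2"));
    }
}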
I am somewhat new to SVMs and object recognition, and am currently attempting to train an SVM using Emgu CV 3.0, save it to a file, and then load it (for use in HOGDescriptor.SetSVMDetector).
However, among other problems, I cannot find a way to load the SVM after I have saved it.
So far, my code basically does the following:
SVM myFirstSVM = new SVM();
// do some stuff, set some parameters...
myFirstSVM.Train(someParameters);
myFirstSVM.Save("filePath");
From here, the problem lies with reloading the SVM after being saved. I have checked several help topics and pages, and the only related things I could find pertained to OpenCV, which used the following method:
SVM mySecondSVM;
mySecondSVM.load("filePath");
However, I could find no method ".load()" in Emgu 3.0, although it appeared to be present in previous versions. Is there an equivalent of this OpenCV method in Emgu 3.0? I would assume there is, and I am sure it is fairly simple, but I cannot for the life of me find it.
For EmguCV 3.0.0, the Save/Load functionality doesn't seem to be supported (Load doesn't exist); you can use Write/Read instead.
A function to save an SVM model:
public static void SaveSVMToFile(SVM model, String path) {
    if (File.Exists(path)) File.Delete(path);
    FileStorage fs = new FileStorage(path, FileStorage.Mode.Write);
    model.Write(fs);
    fs.ReleaseAndGetString();
}
A function to load the SVM model provided the correct path:
public static SVM LoadSVMFromFile(String path) {
    SVM svm = new SVM();
    FileStorage fs = new FileStorage(path, FileStorage.Mode.Read);
    svm.Read(fs.GetRoot());
    fs.ReleaseAndGetString();
    return svm;
}
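A quick usage sketch for the two helpers above (the file name and variable names are just examples):
SVM myFirstSVM = new SVM();
// ... set parameters and train ...
SaveSVMToFile(myFirstSVM, "svm_model.xml");

SVM mySecondSVM = LoadSVMFromFile("svm_model.xml");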
I have saved and read the SVM model using the functions below. I am working with version 3.1.0, but I hope it works for you as well.
I saved the model to an XML file, because as far as I know the read function works on XML files:
Emgu.CV.ML.SVM model = new Emgu.CV.ML.SVM();
model.SetKernel(Emgu.CV.ML.SVM.SvmKernelType.Linear);
model.Type = Emgu.CV.ML.SVM.SvmType.CSvc;
model.C = 1;
model.TermCriteria = new MCvTermCriteria(100, 0.00001);
bool trained = model.TrainAuto(my_trainData, 5);
model.Save("SVM_Model.xml");
and I read the model as follows:
Emgu.CV.ML.SVM model_loaded = new Emgu.CV.ML.SVM();
FileStorage fsr = new FileStorage("SVM_Model.xml", FileStorage.Mode.Read);
model_loaded.Read(fsr.GetFirstTopLevelNode());
and it works correctly.
I hope it works for you too.
For EmguCV 1.5.0:
Load Method (fileName), inherited from StatModel: loads the statistical model from a file.
fileName (String): the file to load the model from.
For EmguCV 3.0+:
Load() is not available, as you can see in the source code: https://sourceforge.net/p/emgucv/code/ci/master/tree/Emgu.CV.ML/StatModel.cs
I'm writing a Jenkins plugin and I'm using build.getWorkspace() to get the path to the current workspace. The issue is that this returns a FilePath object.
How can I convert this to a File object?
Although I haven't tried this, according to the javadoc you can obtain the URI from which you can then create a file: File myFile = new File(build.getWorkspace().toURI())
Please use the act function and call your own FileCallable implementation if your plugin is supposed to work on both the master and slaves. For more information, check the documentation (chapter "Using FilePath smartly") or this Stack Overflow answer.
Code example (source):
void someMethod(FilePath file) {
    // make 'file' a fresh empty directory.
    file.act(new Freshen());
}

// if 'file' is on a different node, this FileCallable will
// be transferred to that node and executed there.
private static final class Freshen implements FileCallable<Void> {
    private static final long serialVersionUID = 1;

    @Override public Void invoke(File f, VirtualChannel channel) {
        // f and file represent the same thing
        f.deleteContents();
        f.mkdirs();
        return null;
    }
}
This approach:
File myFile = new File(build.getWorkspace().toURI())
is not the correct solution. I don't know why it has been the accepted answer to date.
The approach mentioned by Sascha Vetter is correct, taking the reference from the official Jenkins javadoc, which clearly says, and I quote:
Unlike File, which always implies a file path on the current computer, FilePath represents a file path on a specific agent or the controller.
So, being an active contributor to the Jenkins community, I would reference the answer given by Sascha Vetter.
PS: The reputation-point criteria prevent me from up-voting the correct answer.
For testing purposes I'm trying to design a way to verify that the results of statistical tests are identical across versions, platforms, and so on. There is a lot going on, involving ints, nums, dates, Strings, and more inside our collections of objects.
In the end I want to 'know' that the whole set of instantiated objects sums to the same value (by doing something like adding up the checksums of all internal properties).
I can write low-level code for each internal value to return a checksum, but I was thinking that perhaps something like this already exists.
Thanks!
_swarmii
This sounds like you should be using the serialization library (install via Pub).
Here's a simple example to get you started:
import 'dart:io';
import 'package:serialization/serialization.dart';
class Address {
  String street;
  int number;
}

main() {
  var address = new Address()
    ..number = 5
    ..street = 'Luumut';

  var serialization = new Serialization()
    ..addRuleFor(address);

  Map output = serialization.write(address, new SimpleJsonFormat());
  print(output);
}
Then depending on what you want to do exactly, I'm sure you can fine tune the code for your purpose.
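If what you ultimately need is a single comparable value, one option (just a rough sketch; checksumOf is a hypothetical helper, not part of the serialization package, and it assumes the serialized output is JSON-encodable) is to reduce the serialized map to an integer:
import 'dart:convert';

// Hypothetical helper: a deterministic checksum over the serialized map.
int checksumOf(Map serialized) {
  var text = new JsonEncoder().convert(serialized);
  return text.codeUnits.fold(0, (sum, unit) => (sum * 31 + unit) & 0x7fffffff);
}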