I am working on a deep learning project. After training, I save the model as an .h5 file. In another file, I load the saved model and use it to predict. However, when I run the code in PyCharm, the model starts training again. I restarted my laptop and ran it again, but the same thing still happens. Is PyCharm running the wrong file?
model.save('model_10000.h5')
Then, in another file:
model = load_model('model_10000.h5')
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
# predict for test set
pred = model.predict(testX)
This is what I got.
You don't need to compile your saved model; maybe that has something to do with it.
model.save('model_10000.h5')
model = load_model('model_10000.h5')
pred = model.predict(testX)
Check this for more detail: https://www.tensorflow.org/tutorials/keras/save_and_load
You need not re-compile the model. Your saved model is always compiled, and when you load it back, it always returns the compiled model (see the Keras FAQ).
So, just remove the model.compile step and you will be good to go.
Related
Is there a way to export/save/load a previously trained autokeras model? I understand I can use the following code to save/load the underlying tensorflow best model:
model = reg.export_model()
model.save(MODEL_FILEPATH, save_format="tf")
best_model = load_model(MODEL_FILEPATH, custom_objects=ak.CUSTOM_OBJECTS)
However, in practice that wouldn't work, since my data has been fitted by autokeras, which takes care of data preparation and scaling. I don't think I have access to what autokeras is doing to the input data (X) before actually fitting, so I can't actually use the exported tensorflow best model to predict labels for new samples with un-prepared and unscaled features.
Am I missing something major here?
Also I noticed that there are some binaries in the autokeras temporary dir. That dir seems to be generated automatically. Is there a way to use that dir to load the previously-fit autokeras "super" model?
Just using import pickle will do the job; see https://github.com/keras-team/autokeras/issues/1081#issuecomment-645508111:
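A minimal sketch of that approach (assuming reg is your fitted autokeras model, as in the snippet above, and new_X holds raw, unscaled features; the file name is illustrative):
import pickle

# Persist the whole autokeras object, including its internal
# data preparation, so raw samples can be predicted later
with open('autokeras_model.pkl', 'wb') as f:
    pickle.dump(reg, f)

# In another session: restore and predict on raw features
with open('autokeras_model.pkl', 'rb') as f:
    reg = pickle.load(f)
pred = reg.predict(new_X)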
The motivation behind this question is I had saved a Keras model using Matterport's MaskRCNN and in the tf.keras.callbacks.ModelCheckpoint() had very explicitly set the save_weights_only argument to False, so that the entire model would be saved (not just the weights).
Turns out there's a bug in the ModelCheckpoint() callback where it sometimes does not save the full model.
This is obviously a problem when you go to load the model after closing your TF session, as the Graph, architecture, and optimizer state are gone, making it hard (if not impossible) to reload that saved model.
Therefore, I am asking whether it is possible to somehow extract the TF session retroactively, from just the .h5 weights file, after the session has closed (resulting from, for example, your Notebook kernel crashing).
Not much code to go on, but there it is:
Given a .h5 file that was saved after each epoch of training a model in Keras, is it possible to extract the Graph session from that .h5 file, and if so, how?
I have several models saved in .h5 format, but I never called tf.get_session() while saving the model weights.
with tf.Session() as sess:
How do I load this model using TensorFlow?
TF 2.0 makes this a cinch, but how can this be solved on TensorFlow 1.14?
The end goal of this is to take a model saved with Keras as a .h5 file and do inference with it on Tensorflow Serving, which needs, to my knowledge, a protobuf file in .pb format.
https://medium.com/@pipidog/how-to-convert-your-keras-models-to-tensorflow-e471400b886a
I've tried keras_to_tensorflow:
https://github.com/amir-abdi/keras_to_tensorflow
The code to convert a ModelCheckpoint saved in .h5 format to .pb format is shown below:
import tensorflow as tf
# The export path contains the name and the version of the model
tf.keras.backend.set_learning_phase(0) # Ignore dropout at inference
model = tf.keras.models.load_model('./model.h5')
export_path = './PlanetModel/1'
# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors
# And stored with the default serving key
with tf.keras.backend.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs})
For more information, please refer to this article.
For other ways to do it, please refer to this Stack Overflow answer.
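If upgrading is an option, the TF 2.x route mentioned in the question is indeed much simpler; a minimal sketch (paths are illustrative):
import tensorflow as tf

# TF 2.x: load the .h5 model and write a SavedModel
# (saved_model.pb plus a variables/ directory) in one call
model = tf.keras.models.load_model('./model.h5')
tf.saved_model.save(model, './PlanetModel/1')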
I've created a model using Google Cloud's Vision API. I spent countless hours labeling data and trained a model. After almost 20 hours of "training" the model, it's still hit and miss.
How can I iterate on this model? I don't want to lose the "learning" it's done so far. It works about 3 out of 5 times.
My best guess is that I should loop over the objects again, find where it's wrong, and label accordingly. But I'm not sure of the best method for that. Should I be labeling all images where it "misses" as TEST data images? Are there best practices or resources I can read on this topic?
I'm by no means an expert, but here's what I'd suggest in order of most to least important:
1) Add more data if possible. More data is always a good thing, and helps develop robustness with your network's predictions.
2) Add dropout layers to prevent over-fitting
3) Have a tinker with kernel and bias initialisers
4) [The most relevant answer to your question] Save the training weights of your model and reload them into a new model prior to training.
5) Change up the type of model architecture you're using. Then, have a tinker with epoch numbers, validation splits, loss evaluation formulas, etc.
Hope this helps!
EDIT: More information about number 4
So you can save and load your model weights during or after the model has trained. See here for some more in-depth information about saving.
Broadly, let's cover the basics. I'm assuming you're using Keras, but the same applies to tf.keras:
Saving the model after training
Simply call:
# serialize model architecture to JSON
model_json = model.to_json()
with open("{Your_Model}.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("{Your_Model}.h5")
print("Saved model to disk")
Loading the model
You can load the model structure from json like so:
from keras.models import model_from_json

# read the model architecture back from JSON
json_file = open('{Your_Model}.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
And load the weights if you want to:
model.load_weights('{Your_Weights}.h5', by_name=True)
Then compile the model and you're ready to retrain or predict. For me, by_name=True was essential to load the weights back into the same model architecture; leaving it out may cause an error.
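For example, a minimal compile-and-predict sketch (the loss and optimizer are placeholders; use whatever the model was originally compiled with, and test_images stands in for your own data):
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
pred = model.predict(test_images)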
Checkpointing the model during training
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath={checkpoint_path},
                                                 save_weights_only=True,
                                                 verbose=1)
# Train the model with the new callback
model.fit(train_images,
          train_labels,
          epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])  # Pass callback to training
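To tie this back to point 4 above: after rebuilding and compiling the same architecture, restore the checkpointed weights before training further. A minimal sketch, assuming checkpoint_path is the same path used in the callback above:
# Restore the checkpointed weights into a freshly built model
model.load_weights(checkpoint_path)
# Sanity-check the restored weights before resuming training
loss, acc = model.evaluate(test_images, test_labels, verbose=2)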
I have saved my Inception model in PyCharm using the TensorFlow library. Every time I run the project, it starts training on the dataset again. I want to skip the training on every run, because once the model has been saved there is no need to train on the data again and again. How do I know my model has been saved successfully? How can I use the saved model in the same file?
You can save/restore/load your model using TensorFlow:
Save:
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
with tf.Session(graph=tf.Graph()) as sess:
    ...
    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.TRAINING],
                                         signature_def_map=foo_signatures,
                                         assets_collection=foo_assets,
                                         strip_default_attrs=True)
...
builder.save()
Load:
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tag_constants.TRAINING], export_dir)
    ...
For further reference: TensorFlow Guide on Saving a Model
Actually, once you have saved your model, some files will be written to your directory with the extension .yaml, .h5, or .meta (for the graph). You can check the accuracy of the model by restoring it from the saved file, just as a sanity check.
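A minimal sketch of such a sanity check with the Keras API (the file name and test data are placeholders):
from keras.models import load_model

# Restore the saved model and re-check accuracy on held-out data
model = load_model('saved_model.h5')
loss, acc = model.evaluate(testX, testY)
print('Restored model accuracy: %.3f' % acc)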
There are nice tutorials on this:
https://www.tensorflow.org/guide/saved_model
http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/
If you use the Keras API to build your model, then this link will be useful for saving and restoring: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
I am using the LSTM language model from https://github.com/wojzaremba/lstm/blob/master/main.lua
I want to save the model at the end of training for later use. I added the following line at the end of training:
torch.save(params.model_file, model)
Which seems to successfully save the model. However, when I try to load that model and test it, I get a very large perplexity. Just for testing, I ran a small training instance, which resulted in a test set perplexity of 134, then saved the model. I then loaded the saved model and applied exactly the same testing method (function run_test) on the same test set, but I got a huge perplexity of 71675.134 (even using random weights gives much lower perplexity than that!). I tried saving and loading only the weights, converting them to float() before saving, or saving them as cudaTensors, and all gave me the same result.
Here is the code for loading and testing after saving the whole model; I only modified the main method from the original main.lua:
local function main()
  g_init_gpu(arg)
  print('loading model from file ' .. params.model_file)
  model = torch.load(params.model_file)
  state_test = {data=transfer_data(ptb.testdataset(params.batch_size))}
  reset_state(state_test)
  run_test()
end