Read hyperparameters from lightgbm.basic.Booster object - machine-learning

How do you read the hyperparameters from a lightgbm.basic.Booster object?
The object is loaded from a file:
model = pickle.load(open(filename, 'rb'))
Parameters such as n_estimators, boosting_type and learning_rate are not available from model.dump_model().
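As a minimal sketch (not from the original thread): reasonably recent LightGBM versions keep the training parameters on the Booster itself, so they can be read back after unpickling. The filename below is hypothetical, and sklearn-style names such as n_estimators only appear if the model was trained through the sklearn wrapper:

import pickle

# Hypothetical path; replace with your own pickled Booster.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)  # lightgbm.basic.Booster

# Training parameters as a plain dict (assumes a LightGBM version
# where Booster exposes .params).
print(model.params)       # e.g. {'boosting_type': 'gbdt', 'learning_rate': 0.1, ...}

# Number of boosting rounds, the Booster-level analogue of n_estimators.
print(model.num_trees())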

Related

Using torchtext for inference

I wonder what the right way is to use torchtext for inference.
Let's assume I've trained the model and dumped all Fields with built vocabularies. It seems the next step is to use torchtext.data.Example to load one single example. Somehow I should numericalize it using the loaded Fields and create an Iterator.
I would appreciate any simple examples of using torchtext for inference.
For a trained model and vocabulary (which is part of the text field; you don't have to save the whole class):
def read_vocab(path):
    # read vocabulary pkl
    import pickle
    pkl_file = open(path, 'rb')
    vocab = pickle.load(pkl_file)
    pkl_file.close()
    return vocab

def load_model_and_vocab():
    import torch
    import os.path
    my_path = os.path.abspath(os.path.dirname(__file__))
    vocab_path = os.path.join(my_path, vocab_file)   # vocab_file: name of the saved vocab pkl
    weights_path = os.path.join(my_path, WEIGHTS)    # WEIGHTS: name of the saved state_dict file
    vocab = read_vocab(vocab_path)
    model = classifier(vocab_size=len(vocab))
    model.load_state_dict(torch.load(weights_path))
    model.eval()
    return model, vocab

def predict(model, vocab, sentence):
    tokenized = [w.text.lower() for w in nlp(sentence)]  # tokenize the sentence (nlp: spaCy pipeline)
    indexed = [vocab.stoi[t] for t in tokenized]         # convert to integer sequence
    length = [len(indexed)]                              # compute no. of words
    tensor = torch.LongTensor(indexed).to('cpu')         # convert to tensor
    tensor = tensor.unsqueeze(1).T                       # reshape in form of batch, no. of words
    length_tensor = torch.LongTensor(length)             # convert to tensor
    prediction = model(tensor, length_tensor)            # prediction
    return round(1 - prediction.item())
"classifier" is the class I defined for my model.
For saving the vocabulary pkl:
def save_vocab(vocab):
    import pickle
    output = open('vocab.pkl', 'wb')
    pickle.dump(vocab, output)
    output.close()
And for saving the model after training you can use :
torch.save(model.state_dict(), 'saved_weights.pt')
Tell me if it worked for you!

Integrate the ImageDataGenerator into my own customized fit_generator

I want to fit a Siamese CNN with multiple inputs that are stored in memory and have no labels (just an arbitrary dummy label). Therefore, I had to write my own data generator function for using a CNN model in Keras.
My data generator is of the following form:
class DataGenerator(keras.utils.Sequence):
    def __init__(self, train_data, train_triplets, batch_size=32, dim=(128,128), n_channels=3, shuffle=True):
        self.dim = dim
        self.batch_size = batch_size
        # Added
        self.train_data = train_data
        self.train_triplets = train_triplets
        self.n_channels = n_channels
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        n_row = self.train_triplets.shape[0]
        return int(np.floor(n_row / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # Find list of IDs
        list_IDs_temp = self.train_triplets.iloc[indexes,]
        # Generate data
        [anchor, positive, negative] = self.__data_generation(list_IDs_temp)
        y_train = np.random.randint(2, size=(1,2,self.batch_size)).T
        return [anchor, positive, negative], y_train

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        n_row = self.train_triplets.shape[0]
        self.indexes = np.arange(n_row)
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples'
        # anchor, positive and negative: (n_samples, *dim, n_channels)
        # Initialization
        anchor = np.zeros((self.batch_size, *self.dim, self.n_channels))
        positive = np.zeros((self.batch_size, *self.dim, self.n_channels))
        negative = np.zeros((self.batch_size, *self.dim, self.n_channels))
        nrow_temp = list_IDs_temp.shape[0]
        # Generate data
        for i in range(nrow_temp):
            list_ind = list_IDs_temp.iloc[i,]
            anchor[i] = self.train_data[list_ind[0]]
            positive[i] = self.train_data[list_ind[1]]
            negative[i] = self.train_data[list_ind[2]]
        return [anchor, positive, negative]
where train_data is a list of all images and train_triplets is a data frame of image indices used to build each input triplet of images.
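For context, a keras.utils.Sequence like this is handed straight to fit_generator; a minimal sketch, assuming siamese_model is the compiled triplet network:

train_gen = DataGenerator(train_data, train_triplets, batch_size=32,
                          dim=(128, 128), n_channels=3, shuffle=True)
siamese_model.fit_generator(train_gen, epochs=10)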
Now, I want to do some data augmentation for each mini batch supplied to my CNN. I have tried to integrate the ImageDataGenerator of Keras but I couldn't implement it in my code. Is it somehow possible to do it? I am not very experienced with Python and would appreciate any help.
Does this article answer your question?
To put it in a nutshell, Keras's ImageDataGenerator lacks flexibility when it comes to personalized batch generators, and the easiest way to still use data augmentation is simply to switch to another data augmentation tool (like the albumentations library described in the previous article, but you could use imgaug as well).
I just want to warn you that I encountered several issues with albumentations (which I described in a question on GitHub, but so far I have had no answers), so maybe using imgaug is a better idea.
Hope this helps, good luck with your model!
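As an illustrative sketch only, an albumentations pipeline could be applied to each image inside __data_generation; the transforms below are hypothetical choices, so adjust them to your data:

import albumentations as A

# Example augmentation pipeline; tune the transforms to your images.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

def augment_image(img):
    # Apply the pipeline to a single HxWxC numpy image.
    return augment(image=img)["image"]

# In DataGenerator.__data_generation the three lookups would then become:
#     anchor[i]   = augment_image(self.train_data[list_ind[0]])
#     positive[i] = augment_image(self.train_data[list_ind[1]])
#     negative[i] = augment_image(self.train_data[list_ind[2]])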

How to create a CNN model using Keras?

I want to create one CNN model including all nSeizure models instead of creating a model for each seizure file, but I got this error: AttributeError: 'NoneType' object has no attribute 'fit_generator'.
for i in range(0, nSeizure):
    print(nSeizure)
    print('SEIZURE OUT: '+str(i+1))
    print('Training start')
    ## create model
    model = createModel()
    filesPath = getFilesPathWithoutSeizure(i, indexPat)
    ## create one model including all nSeizures models
    for model in range(0, nSeizure):
        mylist.append(model)
        data = mylist.append(model)
        history = data.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75),
                                     validation_data=generate_arrays_for_training(indexPat, filesPath,
                                                                                  start=75),
                                     #steps_per_epoch=10000, epochs=10)
                                     steps_per_epoch=int((len(filesPath)-int(len(filesPath)/100*25))),
                                     validation_steps=int((len(filesPath)-int(len(filesPath)/100*75))),
                                     verbose=2,
                                     epochs=300, max_queue_size=2, shuffle=True, callbacks=[callback])
mylist.append(model) returns None, so when you call data.fit_generator you are effectively calling None.fit_generator.
Consider rewriting the code.
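A hedged sketch of what the rewritten loop might look like, keeping the asker's helper names (createModel, getFilesPathWithoutSeizure, generate_arrays_for_training, callback, indexPat), which are defined elsewhere in their code; the key point is to call fit_generator on the model object itself and collect the trained models in a list:

models = []
for i in range(nSeizure):
    print('SEIZURE OUT: ' + str(i + 1))
    model = createModel()
    filesPath = getFilesPathWithoutSeizure(i, indexPat)
    history = model.fit_generator(
        generate_arrays_for_training(indexPat, filesPath, end=75),
        validation_data=generate_arrays_for_training(indexPat, filesPath, start=75),
        steps_per_epoch=int(len(filesPath) - int(len(filesPath) / 100 * 25)),
        validation_steps=int(len(filesPath) - int(len(filesPath) / 100 * 75)),
        verbose=2, epochs=300, max_queue_size=2, shuffle=True,
        callbacks=[callback])
    models.append(model)  # keep the trained model, not the return value of append()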

"transpose expects a vector of size 5. But input(1) is a vector of size 3" when making an inference POST request to a tensorflow-serving model

I have trained a model and deployed it to tensorflow-serving for inference.
I am getting this error when making a request:
<Response [400]>
{'error': 'transpose expects a vector of size 5. But input(1) is a vector of size 3\n\t [[{{node bidirectional_1/transpose}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _class=["loc:#bidirectional_1/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3"], _output_shapes=[[50,?,512]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embedding_lookup, Attention/transpose/perm)]]'}
The notable difference between this model and the first one I deployed (which worked without issue) is that this one contains a custom Keras Layer, whereas my successful attempt contained only standard Keras layers.
This is how I am testing the POST request to my tf-serving model:
with open("CNN_last_test_set.pkl", "rb") as fp:
x_arr_test, y_test = pickle.load(fp)
out = x_arr_test[:1, :]
out = out.tolist()
payload = {
"instances": [{'input': [out]}]
}
r = requests.post('http://localhost:9000/v1/models/prod_mod:predict', json=payload)
pred = json.loads(r.content.decode('utf-8'))
To create the tensorflow model object to use with tf-serving I am using this function:
def export_model_custom_layer(filename, export_path_base):
    # set the mode to test time
    K.set_learning_phase(0)
    model = keras.models.load_model(filename, custom_objects={"Attention": Attention})
    sess = K.get_session()
    # set the path to save the model and model version
    export_version = 1
    export_path = os.path.join(
        tf.compat.as_bytes(export_path_base),
        tf.compat.as_bytes(str(export_version)))
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input': model.input},
        outputs={t.name.split(':')[0]: t for t in model.outputs},
        legacy_init_op=tf.tables_initializer())
Here I've defined my custom layer as a custom object; in order for this to work I've added this function to my custom layer:
def get_config(self):
    config = {
        'name': "Attention"
    }
    base_config = super(Attention, self).get_config()
    return dict(list(base_config.items()) + list(config.items()))
When I predict with standard Keras model.predict(), using the same data format that the tf-serving model receives, it works as intended:
class Attention(Layer): ...

with open("CNN_last_test_set.pkl", "rb") as fp:
    x_arr_test, y_test = pickle.load(fp)

model = keras.models.load_model(r"Data/modelCNN.model", custom_objects={"Attention": Attention})
out = x_arr_test[:1, :]
test1 = out.shape
out = out.tolist()
test = model.predict([out])
>> print(test)
>> [[0.21351092]]
This leads me to believe that the issue is happening either when I export the model from Keras to the .pb file, or in the way the model is being run in the Docker container.
I am not sure what to make of this error but I'm assuming that this is related to my custom layer object considering that it worked with my previous model that only contained standard Keras layers.
Any help is greatly appreciated, thanks!
EDIT: I solved the issue. The problem was that my input data had two more dimensions than necessary. I realized that when I removed the brackets from around the variable "out", my error changed from 'transpose expects a vector of size 5' to 'transpose expects a vector of size 4'. So I reshaped my "out" variable from (1, 50) to (50,), removed the brackets, and the problem resolved itself.
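For illustration, the fixed request could look roughly like this (a sketch following the description above; the input name and shapes come from the earlier snippets):

import requests

# x_arr_test is loaded from the pickle file exactly as in the snippet above.
out = x_arr_test[0]  # shape (50,) rather than (1, 50)
payload = {
    "instances": [{'input': out.tolist()}]  # no extra list nesting around 'out'
}
r = requests.post('http://localhost:9000/v1/models/prod_mod:predict', json=payload)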

How to make sure 'fit_generator' in Keras scans the data set multiple times?

I am trying to construct an LSTM model for classification, and I used fit_generator so the data fits in memory. The code is:
model = Sequential()
model.add(LSTM(data_dim, return_sequences=True,
               input_shape=(timesteps, data_dim)))  # returns a sequence of vectors of dimension data_dim
model.add(Dropout(0.5))
model.add(Dense(n_classes))  # return the target value
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
encoder = load_model('encoder.h5')
model.fit_generator(generate_batches_from_file(path, batchSize, raw_targets, class_weights),
                    steps_per_epoch=steps_per_epoch,
                    epochs=scans * n_chunks_train)
And the generator code is like:
def generate_batches_from_file(path, batchSize, raw_targets, class_weights):
    while True:
        with open(path, 'r') as file_to_read:
            do_something()
            yield something
After processing 'batchSize' items, we yield something.
My question is: in my eyes, the generate_batches_from_file generator will read the file to the end only once, and at the end of the file it will break the while loop. If I want to scan the file multiple times, what should I do? Could I set some parameters to achieve this?
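The question is left unanswered in the thread, but as a sketch: because the while True loop is outermost, the file is reopened and re-read from the start each time the inner pass finishes, so fit_generator can keep pulling batches for as many epochs as requested. An explicit version (process_line and make_batch are hypothetical helpers standing in for do_something):

def generate_batches_from_file(path, batchSize, raw_targets, class_weights):
    while True:  # restart from the top of the file, forever
        with open(path, 'r') as file_to_read:
            batch = []
            for line in file_to_read:
                batch.append(process_line(line, raw_targets, class_weights))  # hypothetical helper
                if len(batch) == batchSize:
                    yield make_batch(batch)  # hypothetical helper turning rows into (x, y)
                    batch = []
        # falling out of the 'with' just starts another pass over the file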
