I was using PyCharm and imported ResNet50 for image recognition of a sample image. When I run the code, the following error occurred.
I was working through an online course whose code needed to be completed by learners. I configured PyCharm and installed the recommended packages. While learning image recognition with ResNet50 and running the code, I ended up with the following error. Do I need to install ResNet50 separately in PyCharm for this to work? The instructor said the IDE would download ResNet50 automatically while the code executes. The Python code is attached below.
import numpy as np
from keras.preprocessing import image
from keras.applications import resnet50
model = resnet50.ResNet50
img = image.load_img("bay.jpg", target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = resnet50.preprocess_input(x)
predictions = model.predict(x)
predicted_classes = resnet50.decode_predictions(predictions, top=9)
print("This is an image of:")
for imagenet_id, name, likelihood in predicted_classes[0]:
print(" - {}: {:2f} likelihood".format(name, likelihood))
This is the error that I get during execution:
File "/home/warlock/Downloads/Ex_Files_Building_Deep_Learning_Apps/
Exercise Files/05/image_recognition.py", line 21, in <module>
predictions = model.predict(x)
AttributeError: 'function' object has no attribute 'predict'
You have this error because ResNet50 is a function, so you need to call it like a function:
model = resnet50.ResNet50()
This gives you a ResNet50 model with all default parameters.
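The call returns a Keras Model instance, so predict() then works. A minimal sketch of the corrected lines (weights='imagenet' is already the default and is shown only for clarity):

model = resnet50.ResNet50(weights="imagenet")  # call the function to build the model
predictions = model.predict(x)  # model is now a Keras Model with a predict() method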
I have a KNN model pickled together with a StandardScaler:
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
When I try to load the model and pass new values via StandardScaler().transform(), it gives me an error:
sklearn.exceptions.NotFittedError: This StandardScaler instance is not
fitted yet. Call 'fit' with appropriate arguments before using this estimator.
I am loading the values from a dictionary:
dic = {'a':1, 'b':32323, 'c':12}
sc = StandardScaler()
load = pickle.load(open('KNN.mod', 'rb'))
load.predict(sc.transform([[dic['a'], dic['b'], dic['c']]]))
As far as I understand from the error, I have to fit the new data to sc, but if I do so it gives me wrong predictions. I am not sure if I am overfitting or something; random forest and decision tree work fine with that data without the scaler, and logistic regression is semi-OK.
You need to train and pickle the entire machine learning pipeline at the same time. This can be done with the Pipeline tool from sklearn. In your case it will look like this:
import pickle
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier

# scale first, then classify, inside a single pipeline object
pipeline = Pipeline([('scaler', StandardScaler()), ('knn', KNeighborsClassifier())])
pipeline.fit(X_train, y_train)

# save the whole fitted pipeline (scaler included)
pickle.dump(pipeline, open('KNN_pipeline.pkl', 'wb'))

# load the pipeline and predict; the scaler is applied automatically
pipeline = pickle.load(open('KNN_pipeline.pkl', 'rb'))
pipeline.predict(X_test)
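With the fitted pipeline loaded, the dictionary-based prediction from the question works without fitting a separate scaler (assuming dic holds the same three features the model was trained on):

sample = [[dic['a'], dic['b'], dic['c']]]  # one row built from the dictionary values
print(pipeline.predict(sample))  # the pipeline scales the row before the KNN step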
I am trying to run the following code:
import data_processing as dp
import numpy as np
test_set = dp.read_data("./data2019-12-01.csv")
import tensorflow as tf
import keras

def train_model():
    autoencoder = keras.Sequential([
        keras.layers.Flatten(input_shape=[400]),
        keras.layers.Dense(150, name='bottleneck'),
        keras.layers.Dense(400, activation='sigmoid')
    ])
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

trained_model = train_model()
trained_model.load_weights('./weightsfile.h5')
trained_model.evaluate(test_set, test_set)
test_set in line 3 is a numpy array of shape (3280977, 400). I am using Keras 2.1.4 and TensorFlow 1.5.
However, this produces the following error:
ValueError: Input 0 is incompatible with layer flatten_1: expected min_ndim=3, found ndim=2
How can I solve it? I tried changing the input_shape in the Flatten layer and searched the internet for possible solutions, but none of them worked. Can anyone help me out here? Thanks.
After much trial and error, I was able to run the code. This is the code which runs:
import data_processing as dp
import numpy as np
test_set = np.array(dp.read_data("./datanew.csv"))
print(np.shape(test_set))
import tensorflow as tf
from tensorflow import keras
# import keras

def train_model():
    autoencoder = keras.Sequential([
        keras.layers.Flatten(input_shape=[400]),
        keras.layers.Dense(150, name='bottleneck'),
        keras.layers.Dense(400, activation='sigmoid')
    ])
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

trained_model = train_model()
trained_model.load_weights('./weightsfile.h5')
trained_model.evaluate(test_set, test_set)
The change I made was replacing
import keras
with
from tensorflow import keras
This may also work for others who are using old versions of TensorFlow and Keras. I used TensorFlow 1.5 and Keras 2.1.4 in my code.
Keras and TensorFlow only accept batched input data for prediction.
You must 'simulate' the batch index dimension.
For example, if your data is of shape (M x N), you need to feed the prediction step a tensor of shape (K x M x N), where K is the batch dimension.
Simulating the batch axis is very easy; you can use numpy to achieve it:
using np.expand_dims(x, axis=0), an input tensor of shape M x N becomes 1 x M x N. This is why you get that error: the missing '1' (or 'K') is the batch dimension.
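A minimal sketch of adding the batch axis with numpy (the shape here is made up for illustration):

import numpy as np

x = np.zeros((5, 400))  # pretend input of shape (M, N)
x = np.expand_dims(x, axis=0)  # shape becomes (1, 5, 400): the batch axis is added
print(x.shape)  # (1, 5, 400)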
I'm trying to do some experiments on the Omniglot dataset, and I saw that PyTorch has implemented it. I've run the command
from torchvision.datasets import Omniglot
but I have no idea how to actually load the dataset. Is there a way to open it, equivalent to how we open MNIST? Something like the following:
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
The final goal is to be able to open the training and test sets separately and run experiments on them.
You can apply exactly the same transformations, as Omniglot contains images and labels just like MNIST. For example:
import torchvision
dataset = torchvision.datasets.Omniglot(
root="./data", download=True, transform=torchvision.transforms.ToTensor()
)
image, label = dataset[0]
print(type(image)) # torch.Tensor
print(type(label)) # int
Instead of train and test, the Omniglot dataset uses 'background' and 'evaluation' terminology.
background_set = datasets.Omniglot(root='./data', background=True, download=True,
transform=transforms.ToTensor())
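The evaluation set (the analogue of the test set) is loaded the same way by setting background=False:

evaluation_set = datasets.Omniglot(root='./data', background=False, download=True,
                                   transform=transforms.ToTensor())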
I am using the sklearn AffinityPropagation clustering algorithm. The output of the clustering algorithm on my 4-core machine is different from what is generated on a typical server machine. Can someone suggest a method so that I can get similar output on both systems?
I am using the same feature vectors on both machines.
The output on my machine is cluster0: [1, 2, 3], cluster1: [4, 5, 6], but on the server it is cluster0: [1, 2], cluster1: [3, 4], cluster2: [5].
from keras.applications.xception import Xception
from keras.preprocessing import image
from keras.applications.xception import preprocess_input
from keras.models import Model
from sklearn.cluster import AffinityPropagation
import cv2
import glob
base_model = Xception(weights = model_path)
base_model=Model(inputs=base_model.input,outputs=base_model.get_layer('avg_pool').output)
files = glob.glob("*.jpg")
image_vector = []
for f in files:
    image = cv2.imread(f)
    temp_vector = base_model.predict(image)
    image_vector.append(temp_vector)
import numpy as np
image_vector = np.asarray(image_vector)
clustering = AffinityPropagation()
clustering.fit(image_vector)
Packages:
scikit-learn 0.20.3
sklearn 0.0
tensorflow 1.12.0
keras 2.2.4
opencv-python
Machine 1: 4 cores, 8 GB RAM
Machine 2: 7 cores, 16 GB RAM
Results on different machines can be different when running algorithms that are not deterministic.
I suggest that you fix the random seed of numpy and the random seed of Python if you want to be able to reproduce results across machines for such algorithms.
Python random seed can be fixed by using: random.seed(42) (or any other integer)
Numpy random seed can be fixed with: np.random.seed(12345) (or any other integer)
sklearn and Keras use numpy's random number generator, so the second option by itself could solve your issue.
This answer assumes that all libraries versions are the same on both systems.
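As an illustration, a minimal sketch that fixes both seeds at the top of the clustering script, before any model or data code runs:

import random
import numpy as np

random.seed(42)  # fix Python's built-in RNG
np.random.seed(12345)  # fix numpy's RNG, which sklearn and Keras draw from

# ... build image_vector as in the question, then cluster as before:
# clustering = AffinityPropagation().fit(image_vector)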
I am an ML beginner simply implementing Inception-v3 using the ImageNet weights. This is my first run at it. My implementation is in Keras. My predictions are all wrong, and I need a little leg up to see what the problem is. It is actually pretty difficult to find an online example of Inception-v3 used from top to bottom in Keras; most are tutorials on transfer learning. Here is my code.
import keras as k
from keras.applications.inception_v3 import InceptionV3
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.preprocessing import image
import cv2
import numpy as np
model = k.applications.inception_v3.InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
im = 'images/cat.jpg'
print(cv2.imread(im).shape)  # (168, 299, 3)
im = cv2.resize(cv2.imread(im), (299, 299)).astype(np.float32)
im = np.expand_dims(im, axis=0)
print(im.shape)  # (1, 299, 299, 3)
preds = model.predict(im)
print('Predicted:', decode_predictions(preds))
Predicted: [[('n03047690', 'clog', 1.0), ('n01924916', 'flatworm', 7.0789714e-11), ('n03950228', 'pitcher', 2.1705252e-11), ('n02841315', 'binoculars', 4.1622389e-13), ('n06359193', 'web_site', 3.8697981e-16)]]
Could someone suggest what is wrong with this most basic of implementations? Perhaps my input shape is incorrect?
The Inception-v3 model requires that you run your image through preprocess_input() before predicting.
Add:
im = preprocess_input(im)
Also, you should import preprocess_input from keras.applications.inception_v3, not from keras.applications.imagenet_utils.
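Putting both fixes together, a minimal sketch of the corrected prediction path (the BGR-to-RGB conversion is an extra step not mentioned above: cv2 loads images in BGR order, while the ImageNet weights were trained on RGB images):

from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
import cv2
import numpy as np

model = InceptionV3(include_top=True, weights='imagenet')
im = cv2.cvtColor(cv2.imread('images/cat.jpg'), cv2.COLOR_BGR2RGB)  # BGR -> RGB
im = cv2.resize(im, (299, 299)).astype(np.float32)
im = preprocess_input(im)  # scales pixel values to [-1, 1], as Inception-v3 expects
im = np.expand_dims(im, axis=0)  # add the batch axis
print('Predicted:', decode_predictions(model.predict(im)))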