I am new to machine learning and trying to run the following code:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.keys()
It works fine on Google Colab but not on Kaggle. Does anyone know why it isn't working in a Kaggle notebook?
This code runs fine on Kaggle. It takes some time, but it does complete.
Check again.
output:
dict_keys(['data', 'target', 'frame', 'categories', 'feature_names', 'target_names', 'DESCR', 'details', 'url'])
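Once the download finishes, the data and labels come out of the returned object as usual; a small sketch (the shapes are the standard 70,000-sample MNIST split):
X, y = mnist["data"], mnist["target"]
print(X.shape)  # (70000, 784)
print(y.shape)  # (70000,)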
I am using GluonTS to build a DeepAR model, but training takes a long time even though I set ctx = 'gpu', and that option throws an error. My machine has a GPU, but the option didn't work. Any help is much appreciated.
Check your current MXNet version; I believe you're using a CPU-only build.
Please check the following:
import mxnet as mx
print(f'mxnet version: {mx.__version__}')
print(f'Number of GPUs: {mx.context.num_gpus()}')
It should return the number of GPUs.
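If that prints 0, you are on a CPU-only build and will need the CUDA wheel of MXNet that matches your CUDA version (the mxnet-cuXXX packages on PyPI). Once a GPU is visible, here is a sketch of passing an explicit GPU context to the trainer (assuming a GluonTS version whose MXNet Trainer lives in gluonts.mx.trainer and accepts a ctx argument; freq and prediction_length below are placeholders):
import mxnet as mx
from gluonts.mx.trainer import Trainer
from gluonts.model.deepar import DeepAREstimator

# Hand the trainer an explicit MXNet GPU context instead of a bare string
estimator = DeepAREstimator(
    freq="H",                 # placeholder: use your series' frequency
    prediction_length=24,     # placeholder: your forecast horizon
    trainer=Trainer(ctx=mx.gpu(0), epochs=10),
)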
In TensorFlow we use a callback named ReduceLROnPlateau, which reduces the learning rate slightly when the model stops improving. Does anyone know how to do this in XGBoost? I want to know if there is any way to reduce the learning rate when an XGBoost model stops learning.
This article claims:
A quick Google search reveals that there has been some work done utilizing a decaying learning rate, one that starts out large and shrinks at each round. But we typically see no accuracy gains in Cross Validation and when looking at test error graphs there are minimal differences between its performance and using a regular old constant.
They wrote a Python package called BetaBoost to find an optimal sequence for the learning rate scheduler.
In principle, they seem to use a function that returns a list of learning rates for the LearningRateScheduler callback:
import xgboost as xgb
from scipy.stats import beta

def beta_pdf(scalar=1.5,
             a=26,
             b=1,
             scale=80,
             loc=-68,
             floor=0.01,
             n_boosting_rounds=100):
    """
    Get the learning rate for each boosting round from the beta PDF.

    Returns
    -------
    lrs : list
        the resulting learning rates to use.
    """
    lrs = [scalar * beta.pdf(i, a=a, b=b, scale=scale, loc=loc) + floor
           for i in range(n_boosting_rounds)]
    return lrs

[...]

xgb.train(
    [...],
    callbacks=[xgb.callback.LearningRateScheduler(beta_pdf())]
)
I am using the sklearn AffinityPropagation clustering algorithm. The output of the clustering algorithm on my 4-core machine is different from what is generated on a typical server machine. Can someone suggest a method so that I can get the same output on both systems?
I am using the same feature vectors on both machines.
The output on my machine is cluster0: [1,2,3], cluster1: [4,5,6], but on the server it is cluster0: [1,2], cluster1: [3,4], cluster2: [5].
from keras.applications.xception import Xception, preprocess_input
from keras.models import Model
from sklearn.cluster import AffinityPropagation
import numpy as np
import cv2
import glob

# Xception trimmed to its global-average-pooling layer, used as a feature extractor
base_model = Xception(weights=model_path)
base_model = Model(inputs=base_model.input,
                   outputs=base_model.get_layer('avg_pool').output)

files = glob.glob("*.jpg")
image_vector = []
for f in files:
    img = cv2.imread(f)
    img = cv2.resize(img, (299, 299))              # Xception expects 299x299 inputs
    img = preprocess_input(img.astype('float32'))  # scale pixels to the range Xception was trained on
    features = base_model.predict(np.expand_dims(img, axis=0))
    image_vector.append(features.flatten())        # one 2048-dim vector per image

image_vector = np.asarray(image_vector)            # shape: (n_images, 2048)

clustering = AffinityPropagation()
clustering.fit(image_vector)
Packages:
scikit-learn 0.20.3
sklearn 0.0
tensorflow 1.12.0
keras 2.2.4
opencv-python

Machine 1: 4 cores, 8 GB RAM
Machine 2: 7 cores, 16 GB RAM
Results can differ between machines when you run algorithms that are not deterministic.
I suggest fixing both the NumPy random seed and Python's random seed if you want to reproduce results across machines for such algorithms.
The Python random seed can be fixed with random.seed(42) (or any other integer).
The NumPy random seed can be fixed with np.random.seed(12345) (or any other integer).
scikit-learn and Keras use NumPy's random number generator, so the second option by itself may already solve your issue.
This answer assumes that all library versions are the same on both systems.
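Concretely, fixing both seeds at the top of the script, before any model or clustering code runs (a minimal sketch; the seed values themselves are arbitrary):
import random
import numpy as np

random.seed(42)        # fix Python's built-in RNG
np.random.seed(12345)  # fix NumPy's RNG, which scikit-learn and Keras draw from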
I'm an experienced developer, new to Machine Learning. I'm experimenting with Keras/TensorFlow, starting with the mnist_mlp.py example. I installed Keras and TensorFlow using pip on a Mac.
In order to understand the inner workings better, instead of running the file ('python mnist_mlp.py'), I'm cutting and pasting the file contents into a Python (2.7.12) interactive window.
Everything runs fine and I get the 98.4% test accuracy as noted in the comments of that file.
What I want to do next is to feed it novel input and use model.predict() to see how it performs. I create 28x28 images in GIMP and bring them into my Python session (being careful to convert from 4-channel, 8-bit RGBA images to a linear single-channel floating-point array).
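Roughly, the conversion I am doing looks like this (a sketch using Pillow; the file name is a placeholder, and model is the network from the pasted mnist_mlp.py session):
import numpy as np
from PIL import Image

img = Image.open('my_digit.png').convert('L')    # collapse RGBA down to a single grayscale channel
arr = np.asarray(img, dtype='float32') / 255.0   # scale pixel values to [0, 1]
# MNIST digits are light strokes on a dark background; invert if drawn dark-on-light
# arr = 1.0 - arr
x = arr.reshape(1, 784)                          # mnist_mlp.py feeds the network flattened 784-vectors
prediction = model.predict(x)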
When I feed this into the model, I get what look like strange results to me. Some images are correctly categorized while others are wildly off.
They look like perfectly reasonable numbers to me, and they match the MNIST examples pretty closely. When I extract the array back out and look at it, it looks OK, so it doesn't seem to be a flipping or flopping issue. When I feed MNIST images in the same way, they appear to work correctly.
I'm not sure what's going on here. Is it a case of overfitting? Why is the validation data set the same as the test set?
Test images and python code with instructions can be found here:
https://s3.amazonaws.com/stackoverflow-47799896/StackOverflow_47799896.zip
Thanks.
EDIT: I tried the same test with the convnet example (mnist_cnn.py) and got slightly better results but still similar errors. If anyone wants to try that, they can use the same functions in the readme.py file but make these changes:
import numpy as np

x = np.ndarray((1, 28, 28, 1), dtype='float32')

def l(s):
    with open(s, 'rb') as fd:
        _ = fd.read(1)                     # skip the leading byte
        for i in xrange(28):
            for j in xrange(28):
                v = ord(fd.read(1))        # take one byte per 4-byte pixel...
                x[0][i][j][0] = v / 255.0  # ...and scale it to [0, 1]
                _ = fd.read(3)             # skip the remaining three channel bytes
EDIT 2: Interestingly, if I replace the first 19 items in the training data set (out of 60,000) with my images in the MLP case, I get at or near perfect prediction of all my images after training. Does this suggest overfitting?
I am running a large model on TensorFlow using Keras, and toward the end of training the Jupyter notebook kernel stops; in the command line I see the following error:
2017-08-07 12:18:57.819952: E tensorflow/stream_executor/cuda/cuda_driver.cc:955] failed to alloc 34359738368 bytes on host: CUDA_ERROR_OUT_OF_MEMORY
This, I guess, is simple enough: I am running out of memory. I have 4 NVIDIA 1080 Ti GPUs. I know that TF uses only one unless specified otherwise. Therefore, I have two questions:
1. Is there a good working example of how to utilise all GPUs in Keras?
2. In Keras it seems possible to set gpu_options.allow_growth=True, but I cannot see exactly how to do this (I understand this is being a help-vampire, but I am completely new to DL on GPUs).
See CUDA_ERROR_OUT_OF_MEMORY in tensorflow.
See this Official Keras Blog
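For question 1, a minimal sketch of data-parallel training with the multi_gpu_model utility that ships with Keras 2.x (model, x_train and y_train stand in for your own model and data):
from keras.utils import multi_gpu_model

# Replicate the single-GPU model across the 4 cards; weights are merged on the CPU
parallel_model = multi_gpu_model(model, gpus=4)
parallel_model.compile(loss='categorical_crossentropy', optimizer='adam')
parallel_model.fit(x_train, y_train, epochs=10, batch_size=256)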
Try this:
import tensorflow as tf
import keras.backend as K

# Let the GPU allocate memory on demand instead of grabbing it all up front
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
K.set_session(session)  # tell Keras to use this session
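If allowing growth alone is not enough, the same ConfigProto also exposes config.gpu_options.per_process_gpu_memory_fraction, which caps how much of each GPU's memory the process is allowed to claim (for example 0.9 for 90%).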