I am working on some machine learning problems and want to try the powerful Keras package (using the Theano backend) in Python. While running an MLP demo for digit recognition here, it gives the following error message:
Traceback (most recent call last):
File "mlp.py", line 52, in <module>
metrics=['accuracy'])
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 564, in compile
updates=updates, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 459, in function
raise ValueError(msg)
ValueError: Invalid argument 'metrics' passed to K.function
I don't know why it gives this error. Can anyone help me fix it? Thank you in advance.
This error means that you are running Keras version 0 (e.g. 0.3.2) but running code that was written for Keras version 1. You can upgrade to Keras 1, or remove metrics=['accuracy'] from the function call to model.compile().
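If you need the same script to run on both Keras versions, one option is to build the compile keyword arguments conditionally. This is just a sketch: the function name is mine, and the loss/optimizer values are illustrative placeholders, not taken from the original demo.

```python
# Sketch: only pass `metrics` when the installed Keras is 1.x or newer.
# In a real script you would pass keras.__version__ instead of a literal.
def build_compile_kwargs(keras_version):
    kwargs = {'loss': 'categorical_crossentropy', 'optimizer': 'rmsprop'}
    major = int(keras_version.split('.')[0])
    if major >= 1:
        kwargs['metrics'] = ['accuracy']  # not supported by Keras 0.x
    return kwargs

# usage would be: model.compile(**build_compile_kwargs(keras.__version__))
print(build_compile_kwargs('0.3.2'))  # no 'metrics' key
print(build_compile_kwargs('1.0.0'))  # includes 'metrics'
```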
Which version of Keras are you running?
I updated (e.g., "pip install --upgrade keras"), and that keyword is now accepted.
Take care, however, because several other functions have changed. For example, the model methods for accessing layer input and output after training are different.
see http://keras.io/layers/about-keras-layers/
I am learning emeocv with OpenCV from https://www.mkompf.com/cplus/emeocv.html, and I have followed it quite closely. My programming environment is:
Ubuntu 14.04
opencv-2.4.8+dfsg1
In the tutorial page mentioned above, when I reach the 'main program' section, the command
sudo ./emeocv -i images -l
throws an error:
OpenCV Error: Bad argument (train data must be floating-point matrix)
in cvCheckTrainData, file
/build/buildd/opencv-2.4.8+dfsg1/modules/ml/src/inner_functions.cpp,
line 857 terminate called after throwing an instance of
'cv::Exception' what():
/build/buildd/opencv-2.4.8+dfsg1/modules/ml/src/inner_functions.cpp:857:
error: (-5) train data must be floating-point matrix in function
cvCheckTrainData
and I am unable to proceed further.
I don't even know where the file "/build/buildd/opencv-2.4.8+dfsg1/modules/ml/src/inner_functions.cpp" exists.
How can I resolve this error? Please help.
This happens when you started training mode earlier but didn't actually train on any data.
Simply delete the empty trainctr.yml and start again with real data.
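In script form, the fix is just removing the stale file before re-running training. A minimal sketch, assuming trainctr.yml sits in the directory you ran emeocv from:

```python
import os

# trainctr.yml is the training data file emeocv writes; the path assumes
# this runs from the same directory where you ran emeocv.
train_file = "trainctr.yml"

# Remove the file only if it exists and is empty, so a populated training
# set is never deleted by accident.
if os.path.exists(train_file) and os.path.getsize(train_file) == 0:
    os.remove(train_file)

# Then re-run `sudo ./emeocv -i images -l` and label some real digits
# before quitting, so actual training data gets saved.
```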
I train a RandomForestRegressor model on 64-bit Python.
I pickle the object.
When trying to unpickle the object on 32-bit Python, I get the following error:
ValueError: Buffer dtype mismatch, expected 'SIZE_t' but got 'long long'
I really have no idea how to fix this, so any help would be hugely appreciated.
Edit: more detail
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\python27\lib\pickle.py", line 1378, in load
return Unpickler(file).load()
File "c:\python27\lib\pickle.py", line 858, in load
dispatch[key](self)
File "c:\python27\lib\pickle.py", line 1133, in load_reduce
value = func(*args)
File "_tree.pyx", line 1282, in sklearn.tree._tree.Tree.__cinit__ (sklearn\tre
e\_tree.c:10389)
This occurs because the random forest code uses different types for indices on 32-bit and 64-bit machines. This can, unfortunately, only be fixed by overhauling the random forests code. Since several scikit-learn devs are working on that anyway, I put it on the todo list.
For now, the training and testing machines need to have the same pointer size.
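A quick way to check which kind of interpreter you are running (the bit-ness of Python itself, not of the OS) is to look at the pointer size, using only the standard library:

```python
import struct

# Size of a C pointer in the running interpreter: 8 bytes on 64-bit
# Python, 4 bytes on 32-bit. A tree model pickled under one pointer
# size will generally only unpickle under the same pointer size.
bits = struct.calcsize("P") * 8
print("This Python is %d-bit" % bits)
```

Run this on both the training and the testing machine; if the numbers differ, that is the source of the error.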
For ease, use a 64-bit version of Python to deserialize your model. I faced the same issue recently, and it was resolved after taking that step.
So try running it on a 64-bit version. I hope this helps.
I fixed this problem by training the model on the same machine that loads it. I was training the model in a Jupyter Notebook (on a Windows PC) and trying to load it on a Raspberry Pi, and I got this error. Once I trained the model on the Raspberry Pi itself, the problem was fixed.
I had the same problem when I trained the model with 32-bit Python 3.7.0 installed on my system. It was solved after installing 64-bit Python 3.8.10 and training the model again.
I have come up with a text recognition algorithm. This algorithm recognizes text in natural images. I am trying to test it against the groundtruth available for the dataset of ICDAR's robust reading challenge. For this, I have generated an xml file containing coordinates of text regions in a scene image, as recognized by my algorithm. A similar xml file is provided for the groundtruth data.
To generate quantitative results comparing the two XML files, I am required to use the DetEval software (as mentioned on the site). I have installed the command-line version on Linux.
The problem is: DetEval is not reading the input xml files. Specifically,
I run the following command (as per the instructions on the DetEval website):
rocplot /home/ekta/workspace/extract/result_ICDAR_2011/txt/GT2.xml { /home/ekta/workspace/extract/result_ICDAR_2011/txt/final.xml }
Here, GT2.xml is the groundtruth and final.xml is the file generated by my algorithm.
I get the following error message:
evaldetection -p 0.8,0.4,0.8,0.4,0.4,0.8,0,1 "{" "/home/ekta/workspace/extract/result_ICDAR_2011/txt/GT2.xml" | readdeteval -p 1 - >> /tmp/evaldetectioncurves20130818-21541-1kum9m9-0
evaldetection -p 0.8,0.4,0.8,0.4,0.4,0.8,0,1 "{" "/home/ekta/workspace/extract/result_ICDAR_2011/txt/GT2.xml"I/O warning : failed to load external entity "{"
Couldn't parse document {
-:1: parser error : Document is empty
^
-:1: parser error : Start tag expected, '<' not found
^
I/O error : Invalid seek
Couldn't parse document -
rocplot: ERROR running the command:
evaldetection -p 0.8,0.4,0.8,0.4,0.4,0.8,0,1 "{" "/home/ekta/workspace/extract/result_ICDAR_2011/txt/GT2.xml" | readdeteval -p 1 - >> /tmp/evaldetectioncurves20130818-21541-1kum9m9-0Error code: 256
What do I do? I am positive there is no error in generating my XML file, because even the ground-truth file obtained from the website is not being parsed. Please help!
Regards
Ekta
So, I managed to solve this issue. It turns out I was giving the wrong commands. rocplot is to be used only when I need multiple runs on the ground-truth and detection files with varying evaluation parameters. See this paper to learn more about the parameters involved.
Currently, I have one ground truth file and one detection file and I need to run it using just the default parameters used by DetEval. So, here is what needs to be done:
Go to the directory where you have the detevalcmd directory and enter it. Run the following commands in that directory:
1. ./evaldetection /path/to/detection/results/DetectionFilename.xml /path/to/ground/truth/file/GroundTruthFilename.xml > /path/where/you/want/to/store/results/result.xml
This will store the results in result.xml. Next, run the following command:
2. ./readdeteval /path/where/you/stored/results/result.xml
This will give something like:
100% of the images contain objects.
Generality: xxx
Inverse-Generality: xxx
<evaluation noImages="xxx">
<icdar2003 r="xxx" p="xxx" hmean="xxx" noGT="xxx" noD="xxx"/>
<score r="xxx" p="xxx" hmean="xxx" noGT="xxx" noD="xxx"/>
</evaluation>
So, there you go! You have the recall, precision, etc. for your algorithm.
I am a beginner in OpenCV programming. I'm trying to develop an eye-tracking-driven virtual computer mouse using the OpenCV Python version of lkdemo. I have the lkdemo code in Python; I ran it with python pgmname.py and got the following results.
OpenCV Python version of lkdemo
Traceback (most recent call last):
File "test.py", line 64, in <module>
capture = cvCreateCameraCapture (device)
NameError: name 'cvCreateCameraCapture' is not defined.
Can anyone help to solve this?
Update:
now the error is:
OpenCV Python version of lkdemo
Traceback (most recent call last):
File "test.py", line 8, in <module>
import cv
ImportError: No module named cv
Can anyone suggest a solution?
The API changed a while ago. Depending on your version, it should rather be something like:
import cv
capture = cv.CaptureFromCAM(0)
img = cv.QueryFrame(capture)
HTH.
What version of OpenCV are you using?
This example is for version 2.4.5:
import cv2
import numpy as np

c = cv2.VideoCapture(0)       # open the default camera
while True:
    _, f = c.read()           # grab a frame
    cv2.imshow('e2', f)       # display it
    if cv2.waitKey(5) == 27:  # exit on Esc
        break
c.release()
cv2.destroyAllWindows()
I am using libsvm for binary classification. I wanted to try grid.py, as it is said to improve results. I ran this script for five files in separate terminals, and it has been running for more than 12 hours.
This is the state of my 5 terminals now:
[root@localhost tools]# python grid.py sarts_nonarts_feat.txt>grid_arts.txt
Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [61.3997:61.3997], adjusting to [60.7857:62.0137]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py sgames_nongames_feat.txt>grid_games.txt
Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [64.5867:64.5867], adjusting to [63.9408:65.2326]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py sref_nonref_feat.txt>grid_ref.txt
Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [62.4602:62.4602], adjusting to [61.8356:63.0848]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py sbiz_nonbiz_feat.txt>grid_biz.txt
Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
line 2: warning: Cannot contour non grid data. Please use "set dgrid3d".
Warning: empty z range [67.9762:67.9762], adjusting to [67.2964:68.656]
line 4: warning: Cannot contour non grid data. Please use "set dgrid3d".
[root@localhost tools]# python grid.py snews_nonnews_feat.txt>grid_news.txt
Wrong input format at line 494
Traceback (most recent call last):
File "grid.py", line 223, in run
if rate is None: raise "get no rate"
TypeError: exceptions must be classes or instances, not str
I redirected the outputs to files, but those files contain nothing so far.
And , the following files were created :
sbiz_nonbiz_feat.txt.out
sbiz_nonbiz_feat.txt.png
sarts_nonarts_feat.txt.out
sarts_nonarts_feat.txt.png
sgames_nongames_feat.txt.out
sgames_nongames_feat.txt.png
sref_nonref_feat.txt.out
sref_nonref_feat.txt.png
snews_nonnews_feat.txt.out (--> is empty )
There is just one line of information in the .out files.
The .png files are gnuplot plots.
But I don't understand what the above gnuplot warnings convey. Should I re-run them?
Can anyone please tell me how much time this script might take if each input file contains about 144000 lines?
Thanks and regards
Your data is huge: 144,000 lines, so this will take some time. I have used data as large as yours and it took up to a week to finish. If you are using images, which I suppose you are given the large data, try resizing your images before creating the data. You should get approximately the same results with resized images.
The libSVM faq speaks to your question:
Q: Why grid.py/easy.py sometimes generates the following warning message?
Warning: empty z range [62.5:62.5], adjusting to [61.875:63.125]
Notice: cannot contour non grid data!
Nothing is wrong and please disregard the message. It is from gnuplot when drawing the contour.
As a side note, you can parallelize your grid.py operations. The libSVM tools directory README file has this to say on the matter:
Parallel grid search
You can conduct a parallel grid search by dispatching jobs to a
cluster of computers which share the same file system. First, you add
machine names in grid.py:
ssh_workers = ["linux1", "linux5", "linux5"]
and then setup your ssh so that the authentication works without
asking a password.
The same machine (e.g., linux5 here) can be listed more than once if
it has multiple CPUs or has more RAM. If the local machine is the
best, you can also enlarge the nr_local_worker. For example:
nr_local_worker = 2
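Putting the README's two edits together, the affected settings near the top of grid.py would look something like this; the hostnames are placeholders for machines that share your file system and accept passwordless ssh:

```python
# Edits inside libsvm's tools/grid.py (all values are placeholders):

# remote machines sharing the same file system; list a name more than
# once to give a multi-CPU machine more jobs
ssh_workers = ["linux1", "linux5", "linux5"]

# number of grid-search jobs to run on the local machine itself
nr_local_worker = 2
```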
In my Ubuntu 10.04 installation grid.py is actually /usr/bin/svm-grid.py
I guess grid.py is trying to find the optimal value for C (or Nu)?
I don't have an answer for the amount of time it will take, but you might want to try this SVM library, even though it's an R package: svmpath.
As described on that page, it will compute the entire "regularization path" for a two-class SVM classifier in about as much time as it takes to train an SVM with a single value of your penalty parameter C (or Nu).
So, instead of training and cross-validating an SVM with value x for your C parameter, then doing all of that again for x+1, x+2, and so on, you can train the SVM once and then query its predictive performance for different values of C after the fact, so to speak.
Change:
if rate is None: raise "get no rate"
in line 223 in grid.py to:
if rate is None: raise ValueError("get no rate")
Also, try adding:
gnuplot.write("set dgrid3d\n")
after this line in grid.py:
gnuplot.write("set contour\n")
This should fix your warnings and errors, but I am not sure if it will work, since grid.py seems to think your data has no rate.