I'm working on an air-gapped PC and I'd like to install xgboost so I can use it in a machine learning model. However, to do so I have to move xgboost over on a thumbdrive.
I've tried downloading it on a similar machine and transferring it over, but it seems the shared library that the Python bindings load through ctypes doesn't get moved over correctly, and I get the following error when I try to use XGBoost:
CDLL.__getitem__(self, name_or_ordinal)
400 def __getitem__(self, name_or_ordinal):
--> 401 func = self._FuncPtr((name_or_ordinal, self))
402 if not isinstance(name_or_ordinal, int):
403 func.__name__ = name_or_ordinal
AttributeError: function 'XGBGetLastError' not found
I don't understand Python's C-based bindings, so I'm not sure how to fix this without knowing exactly what changes pip made when installing xgboost.
Is there a way to see all the changes pip makes when installing a package?
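One way to see what pip installed: every file pip writes for a package is listed in that package's RECORD metadata, and the standard library can read it back. A minimal sketch, using pip itself as the example distribution name (substitute "xgboost" on the donor machine):

```python
from importlib import metadata

# List every file recorded for an installed distribution; pip writes this
# RECORD at install time, so it reflects exactly what was placed on disk.
files = metadata.files("pip") or []
print(f"{len(files)} files recorded")
for path in files[:5]:
    print(path)
```

For the transfer itself, a usually more robust route is `pip download xgboost -d pkgs/` on a machine with the same OS, architecture, and Python version, copying `pkgs/` to the air-gapped PC, and installing with `pip install --no-index --find-links pkgs/ xgboost`.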
Related
When I try to use nltk and download the 'punkt' resource with nltk.download('punkt'), I get the following errors. I have tried many alternative snippets and switched networks, without luck.
After applying:
df['num_words'] = df['text'].apply(lambda x: len(nltk.word_tokenize(x)))
I am getting the error:
**Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle**
I tried some alternative snippets, like:
import nltk
import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    ssl._create_default_https_context = _create_unverified_https_context

nltk.download()
I also tried switching networks, since some posts suggested it was a server issue.
Try launching the Jupyter notebook session as administrator (open the command prompt or Anaconda prompt as administrator).
The last option would be to download the corpus manually and place it where NLTK can find it.
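A minimal sketch of the manual route, assuming punkt.zip has already been downloaded on a connected machine (e.g. from the nltk_data repository) and copied into the current directory; NLTK searches for tokenizers/punkt under the directories listed in nltk.data.path, such as ~/nltk_data:

```python
import os
import zipfile

# NLTK looks for tokenizers/punkt under the directories in nltk.data.path,
# e.g. ~/nltk_data, so unzip the manually downloaded archive there.
target = os.path.expanduser("~/nltk_data/tokenizers")

if os.path.exists("punkt.zip"):
    os.makedirs(target, exist_ok=True)
    with zipfile.ZipFile("punkt.zip") as zf:
        zf.extractall(target)  # creates ~/nltk_data/tokenizers/punkt/...
    print("punkt unpacked to", target)
else:
    print("punkt.zip not found; download it first")
```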
Running this orange juice sales notebook, I get the error below from the .forecast() method.
code
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_predictions, X_trans = fitted_model.forecast(X_test)
Error (full stacktrace):
**AttributeError: 'TimeSeriesImputer' object has no attribute '_known_df'**
This is commonly fixed by upgrading to the latest SDK. You can do this by running pip install --upgrade "azureml-sdk[explain,automl]" (the quotes keep shells such as zsh from interpreting the square brackets).
Thanks,
Sabina
I'm new to Python/Biopython and tried running BLAST with Biopython like this:
from Bio.Blast import NCBIWWW

fasta_string = open("C:\\xxxx\\xxxx\\xxxx\\abc.fasta").read()
result_handle = NCBIWWW.qblast("blastp", "nr", fasta_string)
print(result_handle.read())
I was wondering if it is possible to run igblast using biopython. I have searched for this but it seems no one is really doing this.
I haven't tried them, but I have found:
pyigblast.
PyIgBlast - Open source parser to call IgBlast and parse results for
high-throughput sequencing. Uses Python multi-processing to get around
bottlenecks of IgBlast multi-threading. Parses blast output to
delimited files (csv, json) for uploading to databases. Can connect
directly with mysql and mongo instances to insert directly.
The code uses Python 2.7 and BioPython.
PyIR
Immunoglobulin and T-Cell receptor rearrangement software. A Python wrapper for IgBLAST that scales to allow for the parallel processing of millions of reads on shared memory computers. All output is stored in a convenient JSON format.
The code uses Python 3.6 and BioPython.
Both tools use a locally installed igblastn executable. You can install igblastn locally with conda install -c bioconda igblast.
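If you only need to drive the local executable yourself rather than use either wrapper, a hypothetical sketch with the standard library is below; the database paths and the AIRR output-format flag are assumptions to adjust for your local igblast installation:

```python
import subprocess

# Invoke a locally installed igblastn and capture its output. The query file
# and germline database paths are placeholders for your own data.
cmd = [
    "igblastn",
    "-query", "abc.fasta",
    "-germline_db_V", "database/human_V",
    "-germline_db_D", "database/human_D",
    "-germline_db_J", "database/human_J",
    "-outfmt", "19",  # tab-separated AIRR rearrangement output
]

try:
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)
except FileNotFoundError:
    print("igblastn not found on PATH; install it locally first")
```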
I'm trying to use the TensorFlow audio recognition model (my_frozen_graph.pb, generated here: https://www.tensorflow.org/tutorials/audio_recognition) on iOS.
But the iOS code NSString* network_path = FilePathForResourceName(@"my_frozen_graph", @"pb"); in TensorFlow Mobile's tf_simple_example project outputs this error message: Could not create TensorFlow Graph: Not found: Op type not registered 'DecodeWav'.
Anyone knows how I can fix this? Thanks!
I believe you are using the pre-built TensorFlow from CocoaPods? It probably does not include that op type, so you should build it yourself from the latest source.
From documentation:
While CocoaPods is the quickest and easiest way of getting started, you
sometimes need more flexibility to determine which parts of TensorFlow
your app should be shipped with. For such cases, you can build the iOS
libraries from the sources. This guide contains detailed instructions
on how to do that.
This might also be helpful: [iOS] Add optional Selective Registration of Ops #14421
Optimization
The build_all_ios.sh script can take optional
command-line arguments to selectively register only the operators
used in your graph.
tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb
Please note this
is an aggressive optimization of the operators; the resulting
library may not work with other graphs, but it will reduce the size of the
final library.
After the build is done you can check /tensorflow/tensorflow/core/framework/ops_to_register.h for operations that were registered. (autogenerated during build with -g flag)
Some progress: having realized the unregistered DecodeWav error is similar to the old familiar DecodeJpeg issue (#2883), I ran strip_unused on the pb as follows:
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=/tf_files/speech_commands_graph.pb \
--output_graph=/tf_files/stripped_speech_commands_graph.pb \
--input_node_names=wav_data,decoded_sample_data \
--output_node_names=labels_softmax \
--input_binary=true
It does get rid of the DecodeWav op in the resulting graph. But running the new stripped graph on iOS now gives me an Op type not registered 'AudioSpectrogram' error.
Also there's no object file audio*.o generated after build_all_ios.sh is done, although AudioSpectrogramOp is specified in tensorflow/core/framework/ops_to_register.h:
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name decode*.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_wav_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name audio*_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$
Just verified that Pete's fix (https://github.com/tensorflow/tensorflow/issues/15921) is good:
add the line tensorflow/core/ops/audio_ops.cc to the file tensorflow/contrib/makefile/tf_op_files.txt and run tensorflow/contrib/makefile/build_all_ios.sh again (compile_ios_tensorflow.sh "-O3" by itself used to work for me after adding a line to tf_op_files.txt, but no longer does with TF 1.4).
Also, use the original model file, don't use the stripped version. Some note was added in the link above.
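The fix above amounts to a one-line build-configuration change; a sketch, run from the TensorFlow source root:

```shell
# Append the audio ops source file to the makefile build's op list,
# then rebuild the iOS libraries.
echo "tensorflow/core/ops/audio_ops.cc" >> tensorflow/contrib/makefile/tf_op_files.txt
tensorflow/contrib/makefile/build_all_ios.sh
```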
I have some code that used to work with ipyparallel using a load_balanced_view. I used to be able to:
from ipyparallel import Client

rcAll = Client()
lbvAll = rcAll.load_balanced_view()

for anInpt in allInpt:
    lbvAll.apply(doAll, anInpt)

lbvAll.wait()
lbvAll.get_result()
and then
lbvAll.results.values()
would be a list of the results
However, lbvAll.apply() no longer works for me.
I can do
result = lbvAll.map_sync(doAll, allInpt)
and result is returned as a list of results.
Using 2-4 cores/engines, I see little improvement with 4 cores over 2.
My feeling is that ipyparallel has changed, but I am not sure, and I do not seem to be using it correctly. Thanks in advance for any help.
I am using Python 3.4.5, IPython3 5.1.0 and ipyparallel 5.2.0 on linux