tf hub.Module can't load ELMo. I hope it works normally again - elmo

When I run the following code in a Jupyter Notebook, it originally worked fine, but now an error suddenly pops up.
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
tf.disable_v2_behavior()
tf.disable_eager_execution()
# Load the pre-trained ELMo model from TF Hub
elmo = hub.Module("https://tfhub.dev/google/elmo/1", trainable=True)
The error is:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc1 in position 149: invalid start byte
I had the same error before; changing the ELMo version solved it then, but that no longer fixes it.
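For reference, a minimal sketch of what "changing the version" looks like, assuming the newer elmo/2 or elmo/3 releases on tfhub.dev are acceptable for your use case; the sanity-check embedding at the end is only illustrative:
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
tf.disable_v2_behavior()
tf.disable_eager_execution()
# Try a different published version of the module (elmo/2 and elmo/3 also exist on tfhub.dev)
elmo = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True)
# Sanity check: embed one toy sentence and print the shape of the contextual embeddings
embeddings = elmo(["the cat is on the mat"], signature="default", as_dict=True)["elmo"]
with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # expected: (1, 6, 1024)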

Related

PSPNet evaluation issue

I am working on PSPNet. When I reach the third step (evaluation) and run the code with the command ./run.sh, I get the errors shown in the image; the code is:
Error using importdata
unable to open file.
Error in eval_sub (line 3)
list = importdata(fullfill(data_root,eval_list));
Error in eval_all (line 35)
eval_sub(data_name,data_root,eval_list,model_weight,model_deploy,fea_cha,crop_size_h,crop_size_w,data_class,data_colormar,...
I am working on a laptop with only a CPU (no GPU).
Please guide me.
Regards

Getting an error when adding a package to the spaCy pipeline

I'm getting an error while adding a spaCy-compatible extension, med7, to the pipeline. I've included reproducible code below.
!pip install -U https://med7.s3.eu-west-2.amazonaws.com/en_core_med7_lg.tar.gz
import spacy
import en_core_med7_lg
from spacy.lang.en import English
med7 = en_core_med7_lg.load()
# Create the nlp object
nlp2 = English()
nlp2.add_pipe(med7)
# Process a text
doc = nlp2("This is a sentence.")
The error I get is
Argument 'string' has incorrect type (expected str, got spacy.tokens.doc.Doc)
I realized I was having this problem because I don't understand the difference between some spaCy components. For instance, in the Negex extension package, loading into the pipeline is done with the Negex command:
negex = Negex(nlp, ent_types=["PERSON","ORG"])
nlp.add_pipe(negex, last=True)
I don't understand the difference between Negex and en_core_med7_lg.load(). For some reason, when I add med7 to the pipeline, it causes this error. I'm new to spaCy and would appreciate an explanation so that I can learn. Please let me know if I can make this question any clearer. Thanks!
med7 is already the loaded pipeline. Run:
doc = med7("This is a sentence.")
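To make the distinction concrete, here is a short sketch under the spaCy 2.x API used above: med7 is a full Language pipeline (like the result of spacy.load), so it is applied to text directly, while something like Negex is a pipeline component that gets registered on an existing nlp object via add_pipe. The example sentence is arbitrary and the printed entities depend on the model:
import en_core_med7_lg

# med7 is itself a loaded spaCy pipeline, so call it on raw text directly
med7 = en_core_med7_lg.load()
doc = med7("The patient was given 400mg of Magnesium hydroxide.")
print([(ent.text, ent.label_) for ent in doc.ents])

# A component such as Negex, by contrast, is a callable that modifies a Doc,
# which is why it is added to an existing pipeline with nlp.add_pipe(negex, last=True).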

TensorFlow.js (TFJS) error: The dtype of dict['image_tensor'] must be int32

I tried to run https://glitch.com/~tar-understood-exoplanet
but the model failed to load and I wasn't able to enable the webcam.
Has anyone had the same issue?
While the program is running, in the console I get the following:
tfjs:2 Uncaught (in promise) Error: The dtype of dict['image_tensor'] provided in model.execute(dict) must be int32, but was float32
at Object.b [as assert] (tfjs:2)
at tfjs:2
at Array.forEach (<anonymous>)
at t.checkInputShapeAndType (tfjs:2)
at t.<anonymous> (tfjs:2)
at tfjs:2
at Object.next (tfjs:2)
at tfjs:2
at new Promise (<anonymous>)
at Zv (tfjs:2)
I have a MacBook Pro, and some other people also had issues running the model on Windows. We also tried different browsers, Safari and Chrome.
SUCCESS! After switching to coco-ssd 2.0.2:
I added version 2.0.2 in line 62 as follows:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd@2.0.2"></script>
This is caused by the warmup run of coco-ssd, which uses a tf.zeros tensor. The default dtype for tf.zeros is 'float' in the recent release of TFJS.
I have put out a new version with the fix. It should work if you use the latest version of coco-ssd (2.0.2) in the glitch example (index.html), as follows:
<!-- Load the coco-ssd model to use to recognize things in images -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd@2.0.2"></script>
Same error here; it just started occurring on Friday night (04/03/2020). The TF model had been working well for the past few weeks.
I got the same error.
My scenario:
I trained a pre-trained model from the TensorFlow model zoo using transfer learning with the TensorFlow API, exported it as a SavedModel (model.pb), and converted it into TFJS format (model.json and sharded .bin files).
When I ran this model.json in JavaScript (on the web), it gave the error below:
Uncaught (in promise) Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32
When I tried someone else's working converted model (model.json and sharded .bin files) in my JavaScript (web) code, it worked.
Conclusion:
There is something wrong with my converted model. I converted it using tensorflowjs_converter. My original model (model.pb) also works accurately in Python.
I'm still trying to convert my model.pb file with different tensorflowjs_converter versions, as it seems to be a converter versioning issue.

Not able to save a pyspark IForest model

Using iforest as described here: https://github.com/titicaca/spark-iforest
But model.save() is throwing an exception:
Exception:
scala.NotImplementedError: The default jsonEncode only supports string, vector and matrix. org.apache.spark.ml.param.Param must override jsonEncode for java.lang.Double.
I followed the code snippet under the "Python API" section of the linked Git page.
from pyspark.ml.feature import VectorAssembler
import os
import tempfile
from pyspark_iforest.ml.iforest import *
# Input dataframe df schema:
# col_1:integer
# col_2:integer
# col_3:integer
assembler = VectorAssembler(inputCols=in_cols, outputCol="features")
featurized = assembler.transform(df)
iforest = IForest(contamination=0.5, maxDepth=2)
model = iforest.fit(featurized)
model.save("model_path")
model.save() should be able to save model files.
Below is the output dataframe I'm getting after executing model.transform(df):
col_1:integer
col_2:integer
col_3:integer
features:udt
anomalyScore:double
prediction:double
I have just fixed this issue. It was caused by an incorrect param type. You can check out the latest code in the master branch and try again.
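Continuing the snippet from the question (model and featurized as defined there) after updating to the fixed master branch, a hedged sketch of the persistence round trip; IForestModel.load is assumed here to follow the standard Spark ML load() convention:
from pyspark_iforest.ml.iforest import IForestModel

# Save the fitted model, then load it back and re-score the assembled dataframe
model.save("model_path")
loaded = IForestModel.load("model_path")
scored = loaded.transform(featurized)  # features / anomalyScore / prediction columns as shown above
scored.show()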

Runtime error in OpenCV text module sample code "webcam_demo"

When I run the other sample code from the text module, everything works fine. But when I try to run the webcam_demo program, I get this error:
Error: Illegal min or max specification!
"Fatal error encountered!" == NULL:Error:Assert failed:in file globaloc.cpp, line 75
The debugger breaks execution right before this line:
ocrs.push_back(OCRTesseract::create());
webcam_demo.cpp
Thank you for helping out
I had the same problem when using Tesseract with OpenCV. When I moved the OCRTesseract initialization before everything else, it started to work.
