How can I solve this problem: tokenizers.BertWordPieceTokenizer error

I'm trying to train a BERT language model from scratch on TPUs (following https://www.youtube.com/watch?v=s-3zts7FTDA), but I'm facing this problem:
bwpt = tokenizers.BertWordPieceTokenizer(
    vocab_file=None,
    add_special_tokens=True,
    unk_token='[UNK]',
    sep_token='[SEP]',
    cls_token='[CLS]',
    clean_text=True,
    handle_chinese_chars=True,
    strip_accents=True,
    lowercase=True,
    wordpieces_prefix='##'
)
After running it, I get:
TypeError Traceback (most recent call last)
<ipython-input-27-8eec5eb54376> in <module>
----> 1 bwpt = tokenizers.BertWordPieceTokenizer(
2 vocab_file=None,
3 add_special_tokens=True,
4 unk_token='[UNK]',
5 sep_token='[SEP]',
TypeError: __init__() got an unexpected keyword argument 'vocab_file'
I'm working on my PC, in a Jupyter notebook, with:
TensorFlow 2.4.1
Tokenizers 0.10.1
Transformers 4.3.3

Sounds like an API mismatch due to a renaming in BertWordPieceTokenizer. Most likely vocab_file was renamed to vocab.
See: https://github.com/huggingface/tokenizers/blob/ee95e7f0cd0defac6f055d02abd103c40d6c7194/bindings/python/py_src/tokenizers/implementations/bert_wordpiece.py#L14-L27
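A minimal sketch of the corrected call, assuming tokenizers 0.10.x, where (per the linked source) the first argument is named vocab rather than vocab_file and add_special_tokens is no longer an __init__ parameter:
import tokenizers

# Same configuration as above, written against the 0.10.x constructor.
# vocab may be None (train a vocabulary later), a path to vocab.txt, or a dict.
bwpt = tokenizers.BertWordPieceTokenizer(
    vocab=None,
    unk_token='[UNK]',
    sep_token='[SEP]',
    cls_token='[CLS]',
    clean_text=True,
    handle_chinese_chars=True,
    strip_accents=True,
    lowercase=True,
    wordpieces_prefix='##'
)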

Related

(-215:Assertion failed) inputs.size() in function 'cv::dnn::dnn4_v20211004::Layer::getMemoryShapes'

I am doing text extraction on specific regions using YOLOv5. I trained a model and converted it to ONNX format so that OpenCV can read it, but when I load the model weights this error occurs. None of the Stack Overflow or GitHub issues I found solve my problem. I downgraded my torch version according to resolved GitHub issues, but my issue is still there.
If anyone has an idea about this error, please let me know. The error is below:
[ERROR:0] global D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp (720) cv::dnn::dnn4_v20211004::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 1 inputs and 1 outputs: [Identity]:(onnx::Resize_445)
Traceback (most recent call last):
File "C:\Users\Python_Coder\Desktop\YOLO_T_ID\Custom-OCR-with-YOLO\Custom_OCRs.py", line 176, in <module>
net = cv2.dnn.readNet(modelWeights)
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:739: error: (-2:Unspecified error) in function 'cv::dnn::dnn4_v20211004::ONNXImporter::handleNode'
> Node [Identity]:(onnx::Resize_445) parse error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\dnn.cpp:5653: error: (-215:Assertion failed) inputs.size() in function 'cv::dnn::dnn4_v20211004::Layer::getMemoryShapes'
OpenCV error with PyTorch model loading

Issue importing the BertTokenizer module for Q&A with finetuned BERT

I am trying to train the model for question answering with a finetuned Q&A BERT.
import torch
from transformers import BertForQuestionAnswering, BertTokenizer
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
While trying to load the tokenizer for the bert-large-uncased-whole-word-masking-finetuned-squad model, I get the error below.
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-29-d478833618be> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
1 frames
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs)
1857 def _save_pretrained(
1858 self,
-> 1859 save_directory: str,
1860 file_names: Tuple[str],
1861 legacy_format: bool = True,
ModuleNotFoundError: No module named 'transformers.models.auto.configuration_auto'
---------------------------------------------------------------------------
I am using the latest version of transformers in my notebook, but it's still giving me this error. Can someone help me with this issue?
Try with:
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
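Once those two calls succeed, a hypothetical end-to-end check (the question, context, and decoding step below are illustrative, not from the original post) would look like:
import torch

question = "What is the capital of France?"
context = "Paris is the capital and largest city of France."

# Encode the question/context pair and run the QA head.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode that span.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))  # expected: "paris"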
I suspect that you have code from a previous version in your cache. Try
transformers.BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad', cache_dir="./")
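Spelled out with its import, and assuming a fresh download into the working directory is enough to bypass the stale cache:
import transformers

# cache_dir="./" forces the tokenizer files to be re-downloaded into the
# current directory instead of reusing the (possibly stale) default cache.
tokenizer = transformers.BertTokenizer.from_pretrained(
    'bert-large-uncased-whole-word-masking-finetuned-squad',
    cache_dir="./",
)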

Z3-solver throws 'model is not available' exception on python3 w/ parallel_enable options set

I have done a few trials of various z3 Python scripts on Linux. In every case where I set the parallel.enable option, check() reports the problem is satisfiable, but when I try to get the model I receive the exception below. This is fairly consistent.
Only differences in trials:
< #z3.set_option("parallel.enable", True)
< #z3.set_option("parallel.threads.max", 16)
Error Message:
Traceback (most recent call last):
  File xxx, line 1348, in <module>
    m=s.model()
File "/Python/3.9/3.9.7-20211101/lib/python3.9/site-packages/z3/z3.py", line 7031, in model
raise Z3Exception("model is not available")
z3.z3types.Z3Exception: model is not available
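For reference, a hypothetical minimal script following the pattern described above (the actual constraints from the trials are not shown in the question):
import z3

# The two options that differ between trials.
z3.set_option("parallel.enable", True)
z3.set_option("parallel.threads.max", 16)

s = z3.Solver()
x, y = z3.Ints("x y")
s.add(x + y == 10, x > 0, y > 0)

result = s.check()
print(result)            # reported as sat in the trials
if result == z3.sat:
    m = s.model()        # the question reports this raising "model is not available"
    print(m)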

Trying to run StyleGAN in a Jupyter notebook, it says "module 'tensorflow' has no attribute 'Dimension'"

!python encode_images.py --optimizer=lbfgs --face_mask=True --iterations=6 --use_lpips_loss=0 --use_discriminator_loss=0 --output_video=True aligned_images/ generated_images/ latent_representations/
print("\n************ Latent code optimization finished! ***************")
2021-08-24 13:33:11.033451: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
  File "encode_images.py", line 12, in <module>
    import dnnlib.tflib as tflib
  File "C:\Users\bkvij\Office Rapid Innovation\StyleGAN Face Morphing - Arxiv Insights\stylegan-encoder\dnnlib\tflib\__init__.py", line 8, in <module>
    from . import autosummary
  File "C:\Users\bkvij\Office Rapid Innovation\StyleGAN Face Morphing - Arxiv Insights\stylegan-encoder\dnnlib\tflib\autosummary.py", line 31, in <module>
    from . import tfutil
  File "C:\Users\bkvij\Office Rapid Innovation\StyleGAN Face Morphing - Arxiv Insights\stylegan-encoder\dnnlib\tflib\tfutil.py", line 34, in <module>
    def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]:
AttributeError: module 'tensorflow' has no attribute 'Dimension'
It's because tf.Dimension is deprecated.
Go to stylegan/dnnlib/tflib/tfutil.py and change the tf.Dimension in line 34 to tf.compat.v1.Dimension.
I think you're using TensorFlow v2. Use Google Colab and it will fix the problem for you; otherwise, you will need to make a virtual environment with Python 3.6, TensorFlow 1.10 and cuDNN 7.3.1, and that will solve the problem.
To expand on Faezeh's answer, you'll have to make the following edits to tfutil.py:
tf.Dimension (line 34) -> tf.compat.v1.Dimension
tf.variable_scope (line 74) -> tf.compat.v1.variable_scope
tf.Session (line 128) -> tf.compat.v1.Session
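As a quick sanity check that the compat symbols behave the same under TensorFlow 2.x, here is a small standalone sketch (not part of the StyleGAN code) exercising the three replacements:
import tensorflow as tf

# StyleGAN's code expects 1.x-style graph mode.
tf.compat.v1.disable_eager_execution()

dim = tf.compat.v1.Dimension(3)                  # replaces tf.Dimension
print(dim.value)                                 # -> 3

with tf.compat.v1.variable_scope("demo", reuse=tf.compat.v1.AUTO_REUSE):  # replaces tf.variable_scope
    x = tf.compat.v1.get_variable("x", shape=[2, 2])

with tf.compat.v1.Session() as sess:             # replaces tf.Session
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(x).shape)                     # -> (2, 2)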
Alternatively, you could just download TensorFlow 1.x and save yourself the hassle.
You can run this in a terminal: pip install tensorflow-addons==0.14.0

loading Tensorflow model in Opencv 3.4.1 failed

I'm using OpenCV 3.4.1 DNN in Java in order to load a "LeNet" model trained using Keras and TensorFlow in Python. The model is saved as a TensorFlow frozen model ".pb", and I'm using the following lines of code to load it:
Dnn cvDnn = new org.opencv.dnn.Dnn();
Net net = cvDnn.readNetFromTensorflow("C:\\Users\\kr\\Desktop\\Plate_Recognition_frozen.pb");
The error says:
OpenCV(3.4.1) Error: Unspecified error (Input layer not found: convolution2d_1_b_1) in cv::dnn::experimental_dnn_v4::`anonymous-namespace'::TFImporter::connect, file C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\dnn\src\tensorflow\tf_importer.cpp, line 553
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(3.4.1) C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:553: error: (-2) Input layer not found: convolution2d_1_b_1 in function cv::dnn::experimental_dnn_v4::`anonymous-namespace'::TFImporter::connect
]
at org.opencv.dnn.Dnn.readNetFromTensorflow_1(Native Method)
at org.opencv.dnn.Dnn.readNetFromTensorflow(Dnn.java:163)
at opencv.Main.main(Main.java:44)
Any help would be appreciated, thanks in advance.
