I am using GluonTS to build a DeepAR model, but the training object takes a lot of time to run. Even when I set ctx = 'gpu' it throws an error. My machine has a GPU, but the option didn't work. Any help is much appreciated...
Check your current mxnet version; I believe you are using a CPU-only build.
Please check the following:
import mxnet as mx
print(f'mxnet version: {mx.__version__}')
print(f'Number of GPUs: {mx.context.num_gpus()}')
It should return the number of GPUs available.
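If a GPU is detected, you also need a GPU-enabled mxnet build and have to pass a GPU context to the GluonTS trainer. A minimal sketch, assuming a GluonTS release whose Trainer accepts a ctx argument (import paths vary between releases, and the freq/prediction_length values are placeholders):
import mxnet as mx
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx.trainer import Trainer  # older releases: from gluonts.trainer import Trainer

# Requires a GPU build of mxnet, e.g. `pip install mxnet-cu101`
# (pick the wheel matching your CUDA version).
estimator = DeepAREstimator(
    freq="H",                 # placeholder frequency
    prediction_length=24,     # placeholder horizon
    trainer=Trainer(ctx=mx.gpu(0), epochs=10),
)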
My task is to perform inference for face detection using an Intel Movidius stick and a Raspberry Pi. The problem is that the model only returns "Scores" -> (1, 3000, 2) and not "Boxes".
Steps:
On my local machine, I trained several models (mb1-ssd, mb1-ssd-lite, vgg16-ssd) from the repository https://github.com/qfgaohao/pytorch-ssd and converted them to ONNX. Then, using the OpenVINO Model Optimizer from openvinotoolkit 2020.1, I obtained the '.bin' and '.xml' files for each model.
Then, using the obtained files, I performed the inference on the Raspberry Pi and hit the mentioned error.
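For context, the ONNX export step is roughly of this shape (a hedged sketch using a torchvision MobilenetV2 as a stand-in; the pytorch-ssd repository ships its own conversion script, and the input size and file name below are placeholders):
import torch
import torchvision

# Stand-in network; the real export would use the trained SSD model instead.
net = torchvision.models.mobilenet_v2(pretrained=False)
net.eval()

dummy = torch.randn(1, 3, 300, 300)  # placeholder SSD-style input size
torch.onnx.export(net, dummy, "model.onnx", opset_version=9)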
Note: The inference works using pretrained face detection models from the Open Model Zoo. The only difference I found when comparing their .xml files with mine is that the last layer, "Detection output", is missing from mine. However, when I visualize my .xml file using Netron, the conversion seems to be correct.
Link to repo: https://github.com/cocacola0/bsc_thesis
OpenVINO™ 2020.3 release is the last OpenVINO™ version that supports Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2.
As an alternative, use ssd_mobilenet_v2_coco and ssdlite_mobilenet_v2, which are available in the Open Model Zoo. Both models work well with your code.
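For reference, a minimal inference sketch with the 2020.x Inference Engine Python API, assuming the IR files are named face-detection.xml/.bin (placeholder names) and the Neural Compute Stick is exposed as the MYRIAD device:
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_blob = next(iter(net.inputs))        # net.inputs is fine for 2020.x releases
n, c, h, w = net.inputs[input_blob].shape

# Placeholder pre-processing: resize and convert HWC BGR -> NCHW.
image = cv2.imread("face.jpg")
blob = cv2.resize(image, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]

results = exec_net.infer(inputs={input_blob: blob})
print({name: out.shape for name, out in results.items()})  # check whether both scores and boxes come back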
I have enabled the GPU option in the Accelerator tab, but when I try to train my model, the GPU is not processing my data; only the CPU is.
You need to specifically use packages like https://rapids.ai/ to move your data from the CPU to the GPU. I am assuming you are using something like pandas for your data operations.
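For example, a minimal cuDF sketch, assuming a CUDA-capable environment with RAPIDS installed (the file and column names are placeholders):
import cudf

# Read the CSV straight into GPU memory and aggregate on the GPU.
gdf = cudf.read_csv("train.csv")      # placeholder file name
agg = gdf.groupby("label").mean()     # placeholder column name
print(agg.head())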
This seems like a problem in your training code: you might not be properly sending the model and inputs to the device, and so you end up using only your CPU. Check out these Kaggle examples on how to use them (see also the sketch after these links):
TPUs: https://www.kaggle.com/docs/tpu#tpu9
GPUs: https://www.kaggle.com/rj1993/kernel-with-gpu-example
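For example, if the notebook uses PyTorch, both the model parameters and every batch have to be moved to the GPU explicitly. A minimal sketch with a placeholder model and random data:
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Training on:", device)

# Tiny placeholder model and data, just to show the .to(device) pattern.
model = nn.Linear(10, 2).to(device)           # move the parameters to the GPU once
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(32, 10).to(device)       # every batch must be moved as well
targets = torch.randint(0, 2, (32,)).to(device)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()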
I have created a new tflite model based on MobilenetV2. It works well without quantization using the CPU on iOS. I should say that the TensorFlow team did a great job, many thanks.
Unfortunately, there is a problem with latency. I use an iPhone 5s to test my model, and these are the results on CPU:
500ms for MobilenetV2 with 224*224 input image.
250-300ms for MobilenetV2 with 160*160 input image.
I used the following pod 'TensorFlowLite', '~> 1.13.1'
That is not fast enough, so I read the TF documentation on optimization (post-training quantization). I suppose I need to use Float16 or UInt8 quantization and the GPU delegate (see https://www.tensorflow.org/lite/performance/post_training_quantization).
I used TensorFlow v2.1.0 to train and quantize my models.
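For the Float16 path, the post-training quantization step with the TF 2.x converter API looks roughly like this (a minimal sketch; the SavedModel path and output file name are placeholders):
import tensorflow as tf

# Post-training float16 quantization of the weights (TF 2.x converter API).
converter = tf.lite.TFLiteConverter.from_saved_model("mobilenet_v2_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("mobilenet_v2_fp16.tflite", "wb") as f:
    f.write(tflite_model)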
Float16 quantization of weights (I used MobilenetV2 model after Float16 quantization)
https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation/ios
pod 'TensorFlowLiteSwift', '0.0.1-nightly'
No errors, but model doesn’t work
pod 'TensorFlowLiteSwift', '2.1.0'
2020-05-01 21:36:13.578369+0300 TFL Segmentation[6367:330410] Initialized TensorFlow Lite runtime.
2020-05-01 21:36:20.877393+0300 TFL Segmentation[6367:330397] Execution of the command buffer was aborted due to an error during execution. Caused GPU Hang Error (IOAF code 3)
Full integer quantization of weights and activations
pod 'TensorFlowLiteGpuExperimental'
Code sample: https://github.com/makeml-app/MakeML-Nails/tree/master/Segmentation%20Nails
I used a MobilenetV2 model after uint8 quantization.
// GPU delegate configuration (TensorFlowLiteGpuExperimental C API)
GpuDelegateOptions options;
options.allow_precision_loss = true;
options.wait_type = GpuDelegateOptions::WaitType::kActive;
//delegate = NewGpuDelegate(nullptr);  // default options
delegate = NewGpuDelegate(&options);
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
  // the delegate could not be applied, e.g. fall back to CPU inference here
}
Segmentation Live[6411:331887] [DYMTLInitPlatform] platform initialization successful
Loaded model 1resolved reporterDidn't find op for builtin opcode 'PAD' version '2'
Is it possible to use a MobilenetV2 quantized model on iOS somehow? Hopefully I made some mistake :) and it is possible.
Best regards,
Dmitriy
This is a link to the GitHub issue with answers: https://github.com/tensorflow/tensorflow/issues/39101
Sorry for the outdated documentation: the GPU delegate should be included in TensorFlowLiteSwift 2.1.0. However, it looks like you're using the C API, so depending on TensorFlowLiteC would be sufficient.
MobileNetV2 does work with the TFLite runtime on iOS, and if I recall correctly it doesn't have a PAD op. Can you attach your model file? With the information provided it's a bit hard to see what's causing the error. As a sanity check, you can get the quantized/non-quantized versions of MobileNetV2 from here: https://www.tensorflow.org/lite/guide/hosted_models
For the int8 quantized model: as far as I know, the GPU delegate only works with FP32 and (possibly) FP16 inputs.
I am running a large model on TensorFlow using Keras, and toward the end of training the Jupyter notebook kernel stops; on the command line I get the following error:
2017-08-07 12:18:57.819952: E tensorflow/stream_executor/cuda/cuda_driver.cc:955] failed to alloc 34359738368 bytes on host: CUDA_ERROR_OUT_OF_MEMORY
This, I guess, is simple enough: I am running out of memory. I have 4 NVIDIA 1080 Ti GPUs. I know that TF uses only one unless told otherwise. Therefore, I have two questions:
Is there a good working example of how to utilise all GPUs in Keras?
In Keras, it seems it is possible to set gpu_options.allow_growth=True, but I cannot see exactly how to do this (I understand this is being a help-vampire, but I am completely new to DL on GPUs).
See CUDA_ERROR_OUT_OF_MEMORY in tensorflow
See this Official Keras Blog
Try this:
import keras.backend as K
# Let TensorFlow grow GPU memory allocation on demand instead of grabbing it all up front.
config = K.tf.ConfigProto()
config.gpu_options.allow_growth = True
session = K.tf.Session(config=config)
K.set_session(session)  # make Keras use this session
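For the first question, newer Keras 2.x releases provide keras.utils.multi_gpu_model. A minimal sketch, assuming four GPUs and a placeholder model:
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# Placeholder single-GPU model.
model = Sequential([Dense(64, activation="relu", input_shape=(100,)),
                    Dense(1)])

# Replicate the model on 4 GPUs; each batch is split across the replicas.
parallel_model = multi_gpu_model(model, gpus=4)
parallel_model.compile(optimizer="adam", loss="mse")
# parallel_model.fit(x_train, y_train, batch_size=256, epochs=10)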
Since there is randomness involved in the computation of a random forest classifier, it is necessary to define a random seed to get reproducible results. How does one do this for OpenCV CvRTrees? I do not see such a parameter in CvRTParams.
Update: The API change of OpenCV 3 removed CvRTParams. However, the title question remains.
It depends on the OpenCV version you are using.
OpenCV 2.4.9 seems to use the global cv::theRNG(), so you can simply set theRNG().state = something before training.
This no longer seems to be possible in OpenCV 3.0.