Can I run DeepLearning4J on an AMD GPU? - deeplearning4j

I think DeepLearning4j uses CUDA, which is an NVIDIA thing. I bought this computer for things like neural networks, but now I'm disappointed that I have an AMD GPU. Is it somehow possible to run DeepLearning4J on an AMD GPU?

It's actually not "deeplearning4j" you should be looking at.
DL4J uses a tensor library called ND4J, which is what actually talks to the hardware.
ND4J has "backends" you can plug in. The available backends cover various CPU architectures, ranging from IBM's POWER to Android and x86.
GPU-wise, yes, we only support CUDA, so it's not as simple a binary answer as the one you're describing.
In theory we could add OpenCL at some point, but then the question becomes "which version of OpenCL?"

No, you cannot use DL4J with an AMD GPU.

Related

What platform to use for YOLO output when using AMD GPU?

I've been tormented by this question for a long time, so I'm asking for advice on which direction to move in. The objective is to develop a universal YOLO application on Windows that can use the computing power of an AMD/Nvidia/Intel GPU or an AMD/Intel CPU (one of these devices will be used). As far as I know, the OpenCV DNN module leads in CPU computation; DNN + CUDA is the intended combination for Nvidia graphics cards and DNN + OpenCL for Intel GPUs. But when testing an AMD RX 580 GPU with DNN + OpenCL, I ran into the following problem: https://github.com/opencv/opencv/issues/17656. Does this module not support AMD GPU computing at all? If so, could you please let me know on which platform this is possible, preferably as efficiently as possible. A possible solution might be Tencent's ncnn, but I'm not sure about its performance on desktop. By output I mean the coordinates of detected objects and their names (in the OpenCV DNN module I got them with cv::dnn::Net::forward()). Also, correct me if I'm wrong somewhere. Any feedback would be appreciated.
I tried the OpenCV DNN + OpenCL module and expected high performance, but this combination does not work.
I believe OpenCV doesn't support AMD GPUs for GPU optimization. If you're interested in running DL models on non-Nvidia GPUs, I suggest looking into PlaidML, YOLO-OpenCL, and DeepCL.
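For reference, here is a rough sketch (in Python, with OpenCV's bindings) of the DNN + OpenCL combination discussed above; the cfg/weights file names are placeholders, and whether the OpenCL target actually works on an AMD card is exactly the open issue linked in the question.

    import cv2
    import numpy as np

    # Placeholder YOLO files; substitute your own config and weights.
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)  # may fall back to CPU if OpenCL is unusable

    img = np.zeros((416, 416, 3), dtype=np.uint8)  # stand-in for a real camera frame
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())  # per-layer detection tensors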

Speeding up inference of Keras models

I have a Keras model which is doing inference on a Raspberry Pi (with a camera). The Raspberry Pi has a really slow CPU (1.2 GHz) and no CUDA GPU, so the model.predict() stage is taking a long time (~20 seconds). I'm looking for ways to reduce that by as much as possible. I've tried:
Overclocking the CPU (+200 MHz), which bought a few extra seconds.
Using float16 instead of float32.
Reducing the image input size as much as possible.
Is there anything else I can do to increase the speed during inference? Is there a way to simplify a model.h5 and take a drop in accuracy? I've had success with simpler models, but for this project I need to rely on an existing model so I can't train from scratch.
The VGG16/VGG19 architectures are very slow since they have lots of parameters. Check this answer.
Before any other optimization, try to use a simpler network architecture.
Google's MobileNet seems like a good candidate since it's implemented in Keras and was designed for more limited devices.
If you can't use a different network, you may compress the network with pruning. This blog post specifically does pruning with Keras.
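As a rough illustration of the "simpler architecture" route, here is a sketch of loading a small MobileNet variant with TensorFlow's bundled Keras; the width multiplier (alpha) and input size are illustrative choices, not values prescribed by this answer.

    import numpy as np
    from tensorflow.keras.applications import MobileNet
    from tensorflow.keras.applications.mobilenet import preprocess_input

    # alpha < 1.0 thins every layer; a smaller input size cuts compute further.
    model = MobileNet(weights="imagenet", alpha=0.25, input_shape=(128, 128, 3))

    frame = np.random.randint(0, 256, (1, 128, 128, 3)).astype("float32")  # stand-in camera frame
    preds = model.predict(preprocess_input(frame))
    print(preds.shape)  # (1, 1000) ImageNet class scores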
Maybe OpenVINO will help. OpenVINO is an open-source toolkit for network inference, and it optimizes the inference performance by, e.g., graph pruning and fusing some operations. The ARM support is provided by the contrib repository.
Here are the instructions on how to build an ARM plugin to run OpenVINO on Raspberry Pi.
Disclaimer: I work on OpenVINO.
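A very rough sketch of what the inference side can look like with OpenVINO's Python runtime (the openvino.runtime API), assuming the Keras model has already been converted to OpenVINO IR; the file name and input shape are placeholders.

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")         # placeholder IR produced by the model converter
    compiled = core.compile_model(model, "CPU")  # with the ARM plugin built in, the Pi's cores appear as "CPU"

    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # replace with your model's input shape
    result = compiled([dummy])[compiled.output(0)]
    print(result.shape)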

What are the recommendations for building OpenCV 3.0 in the presence of a respectable number of third-party components?

When I decided to build the OpenCV library, I found out that there are a number of build options available that do nearly the same thing: speed up the algorithms. For example: TBB, IPP, CUDA, pthreads, Eigen2/Eigen3, OpenCL and others. Are there any benchmarks or known recommendations about which options are better than others, and what caveats should be known?
It depends on your system needs.
CUDA, for example, is relevant only if you have an NVIDIA graphics card; IPP only if you have the right Intel processor.
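Whichever options you pick, it helps to verify afterwards what the resulting build actually enabled. A small sketch using the Python bindings (the C++ cv::getBuildInformation() call prints the same report):

    import cv2

    # Dump the build report and pull out the lines mentioning the acceleration options.
    info = cv2.getBuildInformation()
    for flag in ("TBB", "IPP", "CUDA", "OpenCL", "Eigen", "pthread"):
        hits = [line.strip() for line in info.splitlines() if flag.lower() in line.lower()]
        print(flag, "->", hits if hits else "not mentioned")

    print("Optimized code paths enabled at runtime:", cv2.useOptimized())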

SIFT hardware accelerator for smartphones

I'm a fresh electronics engineering graduate and I have some experience in computer vision. I want to ask whether it's feasible to build a hardware accelerator for the SIFT algorithm (or any other OpenCV algorithm) to be used on smartphones instead of the current software implementation.
What would the advantages be (much lower computation, lower power, enabling more complex applications, ...) and the disadvantages (not better than the current software implementation, ...)?
Do you have an insight of that?
Thanks
You might be interested to check out NEON optimizations - ARM's SIMD instruction set, which the Cortex-A9 cores in Nvidia's Tegra 3 support. Some OpenCV functions are NEON-optimized.
Start by reading this nice article, Realtime Computer Vision with OpenCV; it has performance comparisons about using NEON, etc.
I also recommend you to start here and here, you will find great insights.
OpenCV supports both CUDA and (experimentally) OpenCL.
There are specific optimizations for Nvidia's Tegra chipset, which is used in a lot of phones/tablets. I don't know if any phones use OpenCL.
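For the OpenCL side, OpenCV's "transparent API" (UMat) is the usual way to try it. A brief sketch with the Python bindings - whether it actually runs on a given phone's GPU depends entirely on the OpenCL drivers shipped there:

    import cv2
    import numpy as np

    print("OpenCL available:", cv2.ocl.haveOpenCL())
    cv2.ocl.setUseOpenCL(True)

    img = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
    u = cv2.UMat(img)              # data goes to the OpenCL device when one is active
    edges = cv2.Canny(u, 50, 150)  # dispatched to an OpenCL kernel where available
    print(edges.get().shape)       # .get() copies the result back to a NumPy array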

Is OpenCV 2.0 optimized for AMD processors?

I know that in the past OpenCV was based on IPP and was optimized only for Intel CPUs. Is this still the case with OpenCV 2.0?
History says that OpenCV was originally developed by Intel.
If you check the OpenCV FAQ, it says:
OpenCV itself is open source and written in quite portable C/C++; it runs on other processors already and should be fairly easy to port (for example, there are already some CUDA optimizations for NVidia). On the other hand, OpenCV can sometimes run much faster on Intel processors (and sometimes AMD) because it can take advantage of SSE optimizations. OpenCV can also be compiled statically with the IPP libraries from Intel, which can speed up some functions.
I have used it on other processors and different OS and I've always been very happy, including for video processing applications.
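If you want to see what those SSE-style optimizations buy you on a particular Intel or AMD CPU, here is a quick sketch with the Python bindings that times an operation with the optimized code paths toggled on and off:

    import cv2
    import numpy as np

    img = np.random.randint(0, 256, (2000, 2000), dtype=np.uint8)

    for optimized in (True, False):
        cv2.setUseOptimized(optimized)
        t0 = cv2.getTickCount()
        cv2.medianBlur(img, 5)
        dt = (cv2.getTickCount() - t0) / cv2.getTickFrequency()
        print(f"useOptimized={optimized}: {dt:.4f} s")

    cv2.setUseOptimized(True)  # restore the default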
