I'm interested in using Alea GPU with a third-party library and am trying to get a sense of my options. Specifically, I'm interested in using this L-BFGS library. I'm fairly new to the F# ecosystem but do have experience with both CUDA and functional programming.
I've been using that L-BFGS library as part of a program that implements logistic regression. It would be neat if I could assume the library is correct and write the rest of my code (including the parts that run on the GPU) in type-safe F#.
It seems possible to link C++ with F#. Assuming I figure out how to integrate the L-BFGS library into an F# program, would the introduction of Alea GPU cause any issues?
What I am trying to avoid is rewriting L-BFGS in F# using Alea. However, maybe that's actually the easiest path to using F#. If Alea has any facilities for nonlinear optimization, I could probably use those instead.
Alea GPU does not have a nonlinear optimizer yet. The CUDA version of that L-BFGS library uses a slightly different implementation than the standard CPU L-BFGS, which sometimes causes accuracy issues. Apart from this I did not face any issues with the code, except that the performance win also depends significantly on the objective function. The objective function for logistic regression is numerically relatively cheap.
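To make that last point concrete: the usual (optionally L2-regularized) logistic regression objective, with labels y_i in {-1, +1}, is

    f(w) = \sum_{i=1}^{n} \log\left(1 + \exp(-y_i \, w^T x_i)\right) + \frac{\lambda}{2} \lVert w \rVert^2

so each evaluation costs one dot product plus one exp/log per sample. That is very little arithmetic per byte of data, which limits how much a GPU port of the optimizer can win.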
We have an internal C# version of this code ported to Alea GPU, which could also be used from F#, and we plan to release it in a future version.
I have started using Julia. I read that it is faster than C.
So far I have seen some libraries like Knet and Flux, but both are for deep learning.
There is also a package, PyCall, to use Python inside Julia.
But I am interested in machine learning too, so I would like to use SVM, random forest, KNN, XGBoost, etc., but in Julia.
Is there a native library written in Julia for Machine Learning?
Thank you
A lot of algorithms are just plain available as dedicated packages, like BayesNets.jl.
For "classical" machine learning there is MLJ.jl, a pure-Julia machine learning framework written by the Alan Turing Institute, with very active development.
For neural networks, Flux.jl is the way to go in Julia. It is also very active, GPU-ready, and allows all the exotic combinations that exist in the Julia ecosystem, like DiffEqFlux.jl, a package that combines Flux.jl and DifferentialEquations.jl.
Also keep an eye on Zygote.jl, a source-to-source automatic differentiation package that will serve as a backend for Flux.jl.
Of course, if you're more comfortable with Python ML tools you still have TensorFlow.jl and ScikitLearn.jl, but the OP asked for pure Julia packages and those are just Julia wrappers around Python packages.
Have a look at this kNN implementation and this one for XGBoost.
There are SVM implementations, but they are outdated and unmaintained (search for SVM.jl). But, really, think about other algorithms for much better prediction quality and model-construction performance. Have a look at the OLS (orthogonal least squares) and OFR (orthogonal forward regression) algorithm family; you will easily find detailed algorithm descriptions that are easy to code in any suitable language (the selection rule is sketched below). However, there is currently no Julia implementation I am aware of. I found only Matlab implementations and made my own Java implementation some years ago. I have plans to port it to Julia, but that currently has no priority and may take some years. Meanwhile, why not code it yourself? You won't find any other language that makes it easier to code a prototype and then turn it into a highly efficient production algorithm running heavy loads on a CUDA-enabled GPGPU.
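For reference (the notation is mine, but the rule is standard in the OLS/OFR literature): orthogonalize the candidate regressors into vectors w_i (e.g. by Gram-Schmidt), then greedily select the candidate with the largest error reduction ratio against the target y,

    g_i = \frac{w_i^T y}{w_i^T w_i}, \qquad
    \mathrm{ERR}_i = \frac{g_i^2 \, w_i^T w_i}{y^T y} = \frac{(w_i^T y)^2}{(w_i^T w_i)(y^T y)}

and stop when the accumulated ERR explains enough of the output variance.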
To start with, I recommend this fairly recent publication: Nonlinear identification using orthogonal forward regression with nested optimal regularization.
I am looking for SIMD math libraries (preferably open source) for SSE and AVX. I mean, for example, if I have an AVX register v with 8 float values, I want sin(v) to return the sine of all eight values at once.
AMD has a proprietary library, LibM (http://developer.amd.com/tools/cpu-development/libm/), which has some SIMD math functions, but LibM only uses AVX if it detects FMA4, which Intel CPUs don't have. Also, I'm not sure it fully uses AVX, as all the function names end in s4 (d2) and not s8 (d4). It gives better performance than the standard math libraries on Intel CPUs, but not much better.
Intel has SVML as part of its C++ compiler, but the compiler suite is very expensive on Windows. Additionally, Intel cripples the library on non-Intel CPUs.
I found the following AVX library, http://software-lisc.fbk.eu/avx_mathfun/, which supports a few math functions (exp, log, sin, cos, and sincos). It gives very fast results for me, faster than SVML, but I have not checked the accuracy. It only works on single-precision floating point and does not build in Visual Studio (though that would be easy to fix). It's based on an earlier SSE library.
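For anyone comparing these, here is the interface in question as a minimal sketch: a correctness baseline that just applies the scalar sine lane by lane (the name sin8_scalar is mine, not from any of the libraries above; a real SIMD library replaces the loop with range reduction plus a polynomial kept entirely in AVX registers). Compile with -mavx:

    #include <immintrin.h> // AVX intrinsics
    #include <cmath>
    #include <cstdio>

    // Baseline: 8-wide sin computed lane by lane with scalar std::sin.
    // Useful for checking the accuracy of a vectorized replacement.
    static __m256 sin8_scalar(__m256 v) {
        alignas(32) float a[8];
        _mm256_store_ps(a, v);
        for (int i = 0; i < 8; ++i) a[i] = std::sin(a[i]);
        return _mm256_load_ps(a);
    }

    int main() {
        // _mm256_set_ps takes the highest lane first, so this is 0..7.
        __m256 v = _mm256_set_ps(7, 6, 5, 4, 3, 2, 1, 0);
        alignas(32) float out[8];
        _mm256_store_ps(out, sin8_scalar(v));
        for (int i = 0; i < 8; ++i) printf("sin(%d) = %f\n", i, out[i]);
        return 0;
    }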
Does anyone have any other suggestions?
Edit: I found an SO thread that has many answers on this subject:
Vectorized Trig functions in C?
I have implemented Vecmathlib (https://bitbucket.org/eschnett/vecmathlib/) as a generic library for two other projects (the Einstein Toolkit, and pocl, http://pocl.sourceforge.net/). Vecmathlib is open source and is written in C++.
Gromacs is a highly optimized molecular dynamics software package written in C++ that makes use of SIMD. As far as I know, the SIMD math functionality has not yet been split out into a separate library, but I guess the implementation might be useful to others nonetheless:
https://github.com/gromacs/gromacs/blob/master/src/gromacs/simd/simd_math.h
http://manual.gromacs.org/documentation/2016.4/doxygen/html-lib/simd__math_8h.xhtml
Does anyone know of a linear algebra library for iOS that uses OpenGL ES 2.0 under the covers?
Specifically, I am looking for a way to do matrix multiplication on arbitrary-sized matrices (e.g., much larger than 4x4, more like 5,000 x 100,000) using the GPUs on iOS devices.
Is there a specific reason you're asking for something that "uses OpenGL ES 2.0 under the covers"? Or do you just want a fast, hardware-optimized linear algebra library such as BLAS, which is built into iOS?
MetalPerformanceShaders.framework provides some tuned BLAS-like functions. It is not GLES; it is Metal, and it runs on the GPU. See MetalPerformanceShaders/MPSMatrixMultiplication.h.
OpenGL on iOS is probably the wrong way to go. Metal would be the better choice if you're going to use the GPU.
Metal
You could use Apple's support for Metal compute shaders. I've written high-performance code for my PhD in it. An early experiment I made calculating some fractals using Metal might give you some ideas to get started (a minimal kernel is sketched below).
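For a taste of what a compute shader looks like, here is a minimal, deliberately naive matrix-multiply kernel in the Metal Shading Language (which is C++-based); the kernel name, buffer slots, and row-major layout are my own choices, and a serious implementation would tile into threadgroup memory:

    #include <metal_stdlib>
    using namespace metal;

    // C = A * B, row-major; A is MxK, B is KxN, C is MxN.
    // One thread computes one element of C.
    kernel void matmul(device const float* A [[buffer(0)]],
                       device const float* B [[buffer(1)]],
                       device float*       C [[buffer(2)]],
                       constant uint&      M [[buffer(3)]],
                       constant uint&      K [[buffer(4)]],
                       constant uint&      N [[buffer(5)]],
                       uint2 gid [[thread_position_in_grid]])
    {
        if (gid.x >= N || gid.y >= M) return;
        float acc = 0.0f;
        for (uint k = 0; k < K; ++k)
            acc += A[gid.y * K + k] * B[k * N + gid.x];
        C[gid.y * N + gid.x] = acc;
    }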
Ultimately, this question is too broad. What do you intend to use the library for, or how do you intend to use it? Is it a one off multiplication? Have you tested with current libraries and found the performance to be too slow? If so, by how much?
In general, you can run educational or purely informational experiments on the performance of algorithm X on CPU vs. GPU vs. specialized hardware, but most often you run up against Amdahl's law, and against your own code vs. that of a team of experts in the field.
Accelerate
You can also look into the Accelerate framework, which offers BLAS.
According to the WWDC 2014 talk What's New in the Accelerate Framework, Apple has hand-tuned the linear algebra libraries for their current-generation hardware. They aren't just fast, but also energy-efficient. There are newer talks as well.
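The BLAS part of Accelerate is the standard CBLAS interface, so a single-precision matrix multiply C = A*B is one call (the sizes here are placeholders):

    #include <Accelerate/Accelerate.h> // pulls in cblas on macOS/iOS
    #include <vector>

    int main() {
        const int M = 4, K = 3, N = 5; // A is MxK, B is KxN, C is MxN
        std::vector<float> A(M * K, 1.0f), B(K * N, 1.0f), C(M * N, 0.0f);
        // C = 1.0 * A * B + 0.0 * C, all matrices stored row-major
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    M, N, K,
                    1.0f, A.data(), K,  // lda = K
                    B.data(), N,        // ldb = N
                    0.0f, C.data(), N); // ldc = N
        return 0;
    }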
I'm not looking for a neural networks library, since I'm creating new kinds of networks. For that I need a good "dataflow" language.
Of course you can do this in C, C++, Java, and co., but dealing with the multithreading etc. from scratch would be a nightmare.
At the other extreme, languages like Oz or Erlang seem better adapted, but they don't have many libraries, and they are harder to master (it's easy to play with them, but is it OK to create complete software?).
What would you suggest?
I watched an interesting conference presentation about using Erlang for Neural Networks. You might want to check it out:
From Telecom Networks to Neural Networks; Erlang, as the unintentional Neural Network Programming Language
I also know that the presented system is going to be open-sourced any day now, according to the author's tweet.
Erlang is very well suited for NN.
Neurons can be modeled by processes (there is no problem with having millions of them).
Connections/synapses can be represented by the PIDs of target neurons. It is very easy to initialize such a network as part of the standard init procedure in OTP. Communication would be realized by message passing.
Maybe it would be good to have a global address space in ETS/Mnesia (the built-in datastores) in order to do dynamic reconfiguration of the network structure.
Pattern matching in a receive block can determine what kind of signal a neuron receives and modify it on the fly.
It would be very easy to monitor such a network.
Also consider that an Erlang NN would be 'live' all the time: you would be able to query neurons, layers, routers, etc. at any time.
In C/C++ you just read the current state of arrays/data structures.
Regarding performance, we all know that C/C++ is orders of magnitude faster than Erlang; however, the NN topic is tricky.
If the network holds very few neurons in a very wide address space in a regular array, iterating over it again and again could be costly (in C). The equivalent situation in Erlang would be solved by a single query to the root (input-layer) neurons, which would propagate the query directly to well-addressed neighbors.
DXNN1 and DXNN2, which were built and introduced in the textbook Handbook of Neuroevolution Through Erlang (http://www.amazon.com/Handbook-Neuroevolution-Through-Erlang-Gene/dp/1461444624/ref=zg_bs_760204_22), are open source and available at https://github.com/CorticalComputer.
If you are interested in dataflow programming and multithreading then I would suggest National Instruments LabVIEW. In this case you don't need to bother with multithreading, since it's already there, and you can also use OOP, since OOP is now native in LabVIEW. LabVIEW OOP is also purely based on the dataflow programming paradigm.
If you have any Java experience, then use Scala, a JVM language that is based on the same concept of "actors" as Erlang. It is less strict than Erlang and can easily use any existing Java libraries.
Then, when you find a computationally expensive task that would work better in Erlang, you can use Erlang's jinterface library to communicate between your Scala code and your distributed Erlang nodes.
Using Java does not mean dealing with multithreading from scratch; just use one of the numerous Java actor libraries.
It's not a language in and of itself, but Emergent is very powerful and can be highly customized (it has a full scripting language).
It's open source, too, which could be helpful as a guide if you need to make your own version for your novel architectures.
Why reinvent the wheel? Try PyBrain. It's free and very comprehensive:
Quickstart
Another big plus for Erlang is full integration with DRAKON:
http://drakon-editor.sourceforge.net/drakon-erlang/intro.html
It all depends on your application. C++ and Python are good programming languages for machine learning.
I am working on a personal project in F# and would like to experiment with Markov models. Can anyone recommend a library/sample with source that supports Markov modeling? Since this is a personal project, I would prefer something that is free...
I'm not exactly sure about Markov models, but Infer.NET is a great library for doing statistical inference.
Regarding math and F# in general: there was a native F# mathematics library, FSharp.MathTools (written in F#), which was merged with other projects and eventually became Math.NET (which is in C#, but claims to provide a facade for F# developers).
However, I'm not sure whether the library has any direct support for Markov modeling (or how difficult it would be to implement that on top of what the library provides).
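For what it's worth, a discrete first-order Markov model is small enough that rolling your own on top of Math.NET's matrix types is quite feasible: the whole model is an initial distribution \pi and a transition matrix T, with

    P(s_{t+1} = j \mid s_t = i) = T_{ij}, \qquad
    P(s_1, \dots, s_n) = \pi_{s_1} \prod_{t=1}^{n-1} T_{s_t s_{t+1}}

so training by maximum likelihood reduces to counting observed transitions and normalizing the rows of T.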