Does Drake have a pipeline/platform with which I can implement deep reinforcement learning algorithms?
Drake is frequently used as an environment in open-source deep RL frameworks; it should plug in nicely. You can find some examples by searching the web, and we would welcome more public examples.
(We will likely make a tutorial for it on the main Drake site once we move the tutorials to their own repo so that they can have additional dependencies.)
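To make "plug in nicely" concrete, here is a minimal sketch of the Gym-style interface an RL framework expects you to wrap around a simulator. The dynamics below are a toy double-integrator placeholder; the Drake-specific part (building a Diagram and calling Simulator.AdvanceTo inside step()) is only indicated in comments, since the exact setup depends on your system and is an assumption on my part, not a documented Drake RL API.

```python
import gym
import numpy as np


class SimulatorEnv(gym.Env):
    """Gym-style wrapper around a physics simulation (toy placeholder dynamics)."""

    def __init__(self, dt=0.01):
        self.dt = dt
        self.action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)
        self.state = np.zeros(2, dtype=np.float32)

    def reset(self):
        self.state = np.random.uniform(-0.05, 0.05, size=2).astype(np.float32)
        return self.state.copy()

    def step(self, action):
        # Toy double-integrator update. With Drake you would instead set the
        # actuation input port in the simulator context and call
        # simulator.AdvanceTo(context.get_time() + self.dt).
        pos, vel = self.state
        vel = vel + float(action[0]) * self.dt
        pos = pos + vel * self.dt
        self.state = np.array([pos, vel], dtype=np.float32)
        reward = -(pos ** 2 + 0.1 * vel ** 2)   # keep the state near the origin
        done = bool(abs(pos) > 2.0)
        return self.state.copy(), reward, done, {}
```

Once the environment satisfies this interface, any Gym-compatible deep RL library can train on it without caring that Drake is underneath.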
Related
I need advice on which libraries and game engines I should use for an ML project
My goal is to create a machine learning model for pruning trees. I believe I have to create a game with a generic tree model with some randomness, then create a reinforcement learning model and train the ML model inside the game. The ML model must be able to first find the branch that must be cut, and then find a path to move a robotic arm near that branch to cut it. I have experience in C++ and Java, but I prefer C++. Could you give me advice on which library I should use for ML, and which language and game engine I should use for creating the game? I have a little experience in OpenGL. If it doesn't make any difference, my preferred language is C++, but I know that I should use the right tool for the right job, and Python is the leader in ML, so if it will save time and energy I have nothing against learning Python.
My recommendation is to learn and use Python for your ML project. Though there is some work in R, your best bet for a future in ML is Python: the community is great, and there are many frameworks that work out of the box.
After a quick search, I did find a framework called Robot Framework that is highly starred on GitHub: https://github.com/robotframework/robotframework. I will say, however, that I am not personally familiar with using this framework, but it may be helpful to you.
In terms of tree-based algorithms, you might want to start by exploring XGBoost. It can be found here: https://github.com/dmlc/xgboost.
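If you do go the Python route, a baseline XGBoost model is only a few lines with its scikit-learn style API. The data and parameters below are placeholders, just to show the shape of the workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in data: 500 samples, 10 features, toy binary target.
X = np.random.rand(500, 10)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```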
Unity provides two RL algorithms to train agents: PPO and SAC.
I have been searching for weeks now on how to write my own algorithms and only found a mention of a gym-unity wrapper that wraps Unity environments so that I can write my algorithms against the Gym interface. This wrapper has almost no useful documentation, so I don't have anywhere to start.
My questions are:
(1) How can I import custom-written RL models into Unity?
(2) Is there a better documentation for the wrapper?
You could look at my repository genetic-unity, which implements evolutionary algorithms using the ML-Agents package.
I did not use their implemented agents (PPO and SAC); I just used the interface between Unity and Python to code my own algorithms, which is what you're looking for if I understand correctly.
You could start by looking at the genetic_algorithm.py file to see how I handle the Unity environment.
However, you should note that this work was done 9 months ago and the ML-Agents framework changes at a fast pace, so you may need to adapt it a little.
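To give you a concrete starting point, here is a rough sketch of driving a Unity build through the gym-unity wrapper so you can plug in your own algorithm. The import paths, class names (UnityEnvironment, UnityToGymWrapper), and constructor arguments have changed across ML-Agents releases, so treat them as assumptions to verify against the version you install.

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# Point this at your compiled Unity environment binary.
unity_env = UnityEnvironment(file_name="path/to/YourUnityBuild")
env = UnityToGymWrapper(unity_env)

obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()          # replace with your own policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```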
I have started using Julia. I read that it is faster than C.
So far I have seen some libraries like Knet and Flux, but both are for deep learning.
Also, there is a package called PyCall to use Python inside Julia.
But I am interested in machine learning too, so I would like to use SVM, Random Forest, KNN, XGBoost, etc., but in Julia.
Is there a native library written in Julia for Machine Learning?
Thank you
A lot of algorithms are available as dedicated packages, like BayesNets.jl.
For "classical machine learning" MLJ.jl which is a pure Julia Machine Learning framework, it's written by the Alan Turing Institute with very active development.
For Neural Networks Flux.jl is the way to go in Julia. Also very active, GPU-ready and allow all the exotics combinations that exist in the Julia ecosystem like DiffEqFlux.jl a package that combines Flux.jl and DifferentialEquations.jl.
Just wait for Zygote.jl a source-to-source automatic differentiation package that will be some sort of backend for Flux.jl
Of course, if you're more confident with Python ML tools you still have TensorFlow.jl and ScikitLearn.jl, but OP asked for pure Julia packages and those are just Julia wrappers of Python packages.
Have a look at this kNN implementation and this one for XGBoost.
There are SVM implementations, but they are outdated and unmaintained (search for SVM.jl). But really, consider other algorithms for much better prediction quality and model-construction performance. Have a look at the OLS (orthogonal least squares) and OFR (orthogonal forward regression) algorithm family. You will easily find detailed algorithm descriptions that are easy to code in any suitable language. However, there is currently no Julia implementation I am aware of; I found only Matlab implementations and made my own Java implementation some years ago. I have plans to port it to Julia, but that currently has no priority and may take some years. Meanwhile, why not code it yourself? You won't find any other language that makes it easier to code a prototype and turn it into a highly efficient production algorithm running heavy loads on a CUDA-enabled GPGPU.
I recommend this fairly new publication to start with: Nonlinear identification using orthogonal forward regression with nested optimal regularization
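If you do decide to code it yourself, here is a rough NumPy sketch of the greedy OFR idea (term selection by error reduction ratio). It is an illustration of the algorithm family under my own naming and tolerance choices, not a tuned or validated implementation, and it would port to Julia almost line for line.

```python
import numpy as np

def ofr_select(candidates, y, n_terms):
    """Greedy orthogonal forward regression: pick candidate regressors one at a
    time by their error reduction ratio (ERR), orthogonalizing the rest."""
    X = candidates.astype(float).copy()
    y = y.astype(float)
    n_samples, n_candidates = X.shape
    selected = []
    for _ in range(min(n_terms, n_candidates)):
        best_err, best_j, best_q = -np.inf, None, None
        for j in range(n_candidates):
            if j in selected:
                continue
            w = X[:, j]
            denom = w @ w
            if denom < 1e-12:              # numerically already spanned
                continue
            g = (w @ y) / denom            # projection coefficient
            err = g * g * denom / (y @ y)  # error reduction ratio
            if err > best_err:
                best_err, best_j, best_q = err, j, w.copy()
        if best_j is None:
            break
        selected.append(best_j)
        qq = best_q @ best_q
        for j in range(n_candidates):      # orthogonalize remaining candidates
            if j not in selected:
                X[:, j] -= (X[:, j] @ best_q) / qq * best_q
    return selected

# Tiny usage example with a known sparse structure.
rng = np.random.default_rng(0)
cands = rng.normal(size=(200, 20))
target = 3.0 * cands[:, 4] - 2.0 * cands[:, 11] + 0.1 * rng.normal(size=200)
print(ofr_select(cands, target, n_terms=2))  # should pick columns 4 and 11
```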
Are there any machine learning packages that implement spiking neural networks, or any other stand-alone implementations that could get me started working with them?
A Python library named Brian ought to be useful for you.
There's also what I believe is a programming language named NEURON, but Brian is fairly easy to learn, at least for the basics. It took me a while, though, to figure out how to do a couple of small things, since it's a really high-level language.
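To give a feel for the level of abstraction, here is a minimal Brian 2 sketch of a small leaky integrate-and-fire population. The equation, threshold, and constants are illustrative choices of mine, so check them against the Brian 2 documentation for your version.

```python
from brian2 import NeuronGroup, SpikeMonitor, run, ms

tau = 10 * ms
eqs = 'dv/dt = (1.0 - v) / tau : 1'   # leaky integrate-and-fire, dimensionless v

group = NeuronGroup(10, eqs, threshold='v > 0.8', reset='v = 0', method='exact')
group.v = 'rand()'                    # random initial membrane potentials
spikes = SpikeMonitor(group)

run(100 * ms)
print("spike counts per neuron:", spikes.count[:])
```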
There are several other SNN platforms these days that allow you to run classification. I have worked with NeuCube (https://kedri.aut.ac.nz/R-and-D-Systems/neucube), which is a MATLAB- and Java-based SNN platform.
Also, check out the Akida Development Environment (ADE) from BrainChip Inc (https://brainchipinc.com/). One of the best features of ADE is that its APIs follow the TensorFlow/Keras structure, and it also provides a CNN2SNN converter so you can use your deep learning models in the SNN domain. SNN models developed using this platform can be deployed on their neuromorphic processor, Akida.
I believe there are other platforms within the SNN domain such as PyNN and Nengo (with compatibility to run models on Loihi).
Here are links for the Brian simulator:
https://github.com/brian-team/brian2
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2605403/
http://briansimulator.org/
You can install the Nengo Loihi library for deployment not only of spiking neural networks but also of neuromorphic neural networks.
Here's the link to their website: https://www.nengo.ai/nengo-loihi/v1.0.0/index.html
On Kaggle you can find an implementation for the CIFAR-10 dataset, loaded locally, using the Nengo Loihi library. Here's the link:
https://www.kaggle.com/migueltoms/neuromorphic-ciphar-10-loihi-comparison-of-results
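As a rough sketch of the workflow (assuming the standard Nengo API and the nengo_loihi package; details may differ between releases), you build a network with core Nengo and then swap in the NengoLoihi simulator, which can also run a software emulation when no Loihi hardware is attached.

```python
import numpy as np
import nengo
import nengo_loihi

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # 1 Hz sine input
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)   # spiking population
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)              # filtered decoded output

with nengo_loihi.Simulator(model) as sim:               # hardware or emulator backend
    sim.run(1.0)

print(sim.data[probe].shape)
```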
Why is there so much interest in deep learning in the NLP and ML community?
Why do they need approaches to learn complex non-linear relationships?
I guess the most interesting thing about deep learning is the capability to learn high-level features in an unsupervised way.
Deep learning neural networks have recently shown very strong improvements on tasks in computer vision and NLP compared to some other machine learning methods that have been popular for longer.
At least in acoustic modelling for speech recognition, deep learning helps us get better features when compared to MFCCs.
For an in-depth look at deep learning and why it is important and interesting, take a look at my article here: http://simonwinder.com/2015/01/what-is-deep-learning/
I have been working on this stuff since the '90s, and it's bizarre to see it take off so suddenly.