I have been looking for a way to convert STK physical models into an AudioKit polyphonic node. There are examples inside the AudioKit repo, but the complete process of triggering the notes gets complex.
If someone could show an easy way to integrate an STK model as an AudioKit node, it would be very helpful.
I am stuck on a project for detecting suspicious facial expressions along with detecting foreign objects (e.g. guns, metal rods, or anything similar). I don't know much about ML or image processing, and I need to complete the project as soon as possible. It would be helpful if anyone could point me in the right direction on a few things.
How can I manage a dataset?
Which type of code should I follow?
How do I present the final system?
I know it is a lot to ask but any amount of help is appreciated.
I have tried to train a model using transfer learning by following this YouTube tutorial:
https://www.youtube.com/watch?v=avv9GQ3b6Qg
The tutorial uses MobileNet as the model and a known dataset with 7 classes (Angry, Disgust, Fear, Happy, Neutral, Sad, Surprised). I was able to train the model successfully and get faces classified into these 7 emotions.
How do I further develop it to achieve what I want?
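For reference, here is a minimal sketch of the kind of MobileNet transfer-learning setup described above (assuming TensorFlow/Keras and a folder of face crops sorted into the 7 emotion classes; the data path and hyperparameters are purely illustrative):

```python
# Illustrative sketch of a MobileNet transfer-learning setup for 7 emotion classes.
# Assumes TensorFlow/Keras; "emotion_data/" is a placeholder directory with one
# sub-folder per class.
import tensorflow as tf

NUM_CLASSES = 7  # Angry, Disgust, Fear, Happy, Neutral, Sad, Surprised

# MobileNet pretrained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for transfer learning

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "emotion_data/", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```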
I need advice on which libraries and game engines I should use for an ML project.
My goal is to create a machine learning model for pruning trees. I believe I have to create a game with a generic tree model with some randomness, then create a reinforcement learning model and train it inside the game. The ML model must be able to first find the branch that must be cut and then find a path to move the robotic arm near that branch to cut it. I have experience in C++ and Java, but I prefer C++. Could you advise which library I should use for ML, and which language and game engine I should use for creating the game? I have a little experience in OpenGL. If it doesn't make any difference, my preferred language is C++, but I know I should use the right tool for the job, and Python is the leader in ML, so if it will save time and energy I have nothing against learning Python.
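For illustration only, this is roughly what a custom RL environment for such a simulated pruning task could look like (using the gymnasium API; the observation layout, action layout, and reward are entirely hypothetical):

```python
# Purely illustrative skeleton of a custom RL environment for a simulated
# pruning task; the state, actions, and reward here are hypothetical.
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class PruningEnv(gym.Env):
    def __init__(self):
        super().__init__()
        # Example observation: arm joint angles plus a coarse encoding of the tree.
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(16,), dtype=np.float32)
        # Example action: small changes to the arm joints plus a "cut" command.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(7,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._state = np.zeros(16, dtype=np.float32)
        return self._state, {}

    def step(self, action):
        # A real implementation would move the arm inside the game/simulator here
        # and reward reaching and cutting the correct branch.
        reward = 0.0
        terminated = False
        truncated = False
        return self._state, reward, terminated, truncated, {}
```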
My recommendation is to learn and use Python for your ML project. Although there is some work in R, for your future in ML your best bet is to learn and use Python. The community is great, and there are many frameworks that work out of the box.
After a quick search, I found a framework called robotframework, which is pretty highly starred on GitHub: https://github.com/robotframework/robotframework. I will say, however, that I am not personally familiar with using this framework, but it may be helpful to you.
In terms of tree-based algorithms, you might want to start by exploring XGBoost. It can be found here: https://github.com/dmlc/xgboost.
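If you do explore XGBoost, a minimal classification example looks roughly like this (assuming the Python xgboost and scikit-learn packages; the toy dataset is just for illustration):

```python
# Minimal XGBoost classification example on a toy dataset (illustrative only).
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```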
I am searching for resources on creating a polyphonic DSP node in AudioKit 5, so that I can connect it to and use it with AudioEngine. For the C++ DSP, I am using Faust.
A single-voice Faust node works for me in AudioKit by using faust2audiokit (AudioKit 5.0.1), but I haven't had any success with a polyphonic node.
I'm not sure about the DSP nodes, but the AudioKit Oscillators are monophonic. For polyphonic synths they recommend using the DunneAudioKit Synth class. There is a polyphonic oscillator example in the AudioKit Cookbook, but it is basically a round-robin Oscillator pool.
I am confused about Drake's Simulator. Do you need to use the Simulator class with a real robot arm?
What does Drake's Simulator do?
Drake's Simulator class is used for advancing time for a Drake System. The System may contain a simulated robot, or may communicate directly with a real robot. Please consider looking through the overview material on the Drake web site and at some of the examples that come with Drake to get the big picture.
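As a rough sketch of how the Simulator is typically driven from Python (assuming pydrake; the specific system wired up here is just a placeholder):

```python
# Rough sketch of advancing time for a Drake System with the Simulator
# (assumes pydrake is installed; the plant settings are placeholders).
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
# Build a simple multibody plant; a real setup would add robot models here.
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.001)
plant.Finalize()
diagram = builder.Build()

simulator = Simulator(diagram)
simulator.set_target_realtime_rate(1.0)
simulator.AdvanceTo(5.0)  # advance simulated time to t = 5 seconds
```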
Unity provides two RL algorithms to train agents: PPO and SAC.
I have been searching for weeks now on how to write my own algorithms and have only found a mention of a gym-unity wrapper that wraps Unity environments so that I could just write my algorithms against Gym. This wrapper has no useful documentation, so I don't have anywhere to start.
My questions are:
(1) How can I import custom-written RL models into Unity?
(2) Is there a better documentation for the wrapper?
You could look at my repository genetic-unity, which implements evolutionary algorithms using the ML-Agents package.
I did not use their implemented algorithms (PPO and SAC); I just used the interface between Unity and Python to code my own algorithms, which is what you're looking for, if I understand correctly.
You could start by looking at the genetic_algorithm.py file to see how I handle the Unity environment.
However, you should note that this work was done 9 months ago and the ML-Agents framework changes at a fast pace, so you may need to adapt it a little.
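If you just want to drive a Unity build from your own Python code, the basic wrapper pattern looks roughly like this (assuming the mlagents_envs and gym_unity packages; the build path is a placeholder, and package or class names may have changed between ML-Agents releases):

```python
# Rough sketch of driving a Unity build from custom Python RL code via the
# gym wrapper (package/class names may differ between ML-Agents releases).
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# "path/to/UnityBuild" is a placeholder for a built Unity player.
unity_env = UnityEnvironment(file_name="path/to/UnityBuild")
env = UnityToGymWrapper(unity_env)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # replace with your own algorithm's policy
    obs, reward, done, info = env.step(action)
env.close()
```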