How to build a simple graph federated learning framework?

Now I want to build a simple graph federated learning framework. At present, I have the following questions:
Besides FedML, is there any other graph federated learning framework?
How do I start building such a framework? I don't know where to start.
What prerequisite knowledge is required to complete such a framework?
Are there any articles or source code that I can refer to?
I have tried reading some papers about machine learning systems, but they don't help with my question and I'm still clueless. I think all I need is a way to get started, plus any articles or source code I can refer to.
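One possible starting point: at their core, most federated learning frameworks (graph or otherwise) revolve around a server that aggregates client model updates, commonly with FedAvg. Below is a minimal pure-Python sketch of FedAvg-style parameter averaging; the flat weight lists standing in for GNN parameters, and the client/size numbers, are illustrative assumptions, not taken from any particular framework:

```python
# Minimal FedAvg sketch: the server averages client weights,
# weighted by each client's local dataset size.

def fed_avg(client_weights, client_sizes):
    """Average flat weight vectors, weighted by dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Toy round: three clients, each holding locally trained parameters
# (in a graph setting these would be flattened GNN weights).
clients = [
    [1.0, 2.0],   # client A's parameters after local training
    [3.0, 4.0],   # client B
    [5.0, 6.0],   # client C
]
sizes = [10, 10, 20]  # e.g. number of local graphs or nodes per client

global_model = fed_avg(clients, sizes)
print(global_model)  # size-weighted average of the three clients
```

In a real graph FL system you would add the graph-specific parts on top of this loop (e.g. handling cross-client edges or non-IID graph data); FedML's FedGraphNN benchmark is a useful reference for those details.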

Related

Where can I get a detailed tutorial or documentation for Q# machine learning?

Recently, I have been learning the Q# language for machine learning. The half-moons sample runs correctly, and now I want to understand the code in detail. But there is very little explanation to be found: there are many methods I can't understand, and no detailed introductions. For example, the documentation only gives a method's name and parameters, with no further information.
I really can't understand it.
So is there a detailed machine learning document for beginners? Thank you very much.
The Q# machine learning library implements one specific approach: circuit-centric quantum classifiers. You can find the documentation for this approach at https://learn.microsoft.com/en-us/azure/quantum/user-guide/libraries/machine-learning/intro and the subsequent pages in that section. The paper it is based on is 'Circuit-centric quantum classifiers' by Maria Schuld, Alex Bocharov, Krysta Svore and Nathan Wiebe.

How to learn and create a speech recognition system?

I want to create a speech recognition system for the Punjabi language as a personal project. I am willing to learn it and to read any book, even if it takes a year.
Can someone experienced here guide me in the right direction?
It would be a great help to me.
I have programming knowledge and can code if required, but I really don't know where to start. All I know is that it requires extensive use of machine learning, and I am learning TensorFlow for it.
For your purpose, it makes sense to use a pre-existing framework. Creating a speech recognition engine from scratch will not be worth your time given all the existing engines and frameworks out there; years of research have gone into creating many of the frameworks that exist today. Something like Kaldi would be a good choice. Here are two pages you may want to check out:
https://www.quora.com/How-do-I-start-learning-speech-recognition-algorithms
https://www.quora.com/How-do-I-use-KALDI-speech-recognition-toolkit-to-build-our-own-Automatic-Speech-Recognition-System

How to extract the citation metadata from scientific publications/books

I am planning to create an application that involves extraction of citation metadata. By doing this, I hope to reduce the human effort needed to search for cited information and to make it easily accessible (by automating that process).
So far, I have read various scientific papers and found that FLUX-CiM (by Eli Cortez) has the highest efficiency of all the existing models. Unfortunately, there is no existing implementation of it anywhere, and no details regarding its implementation (libraries used, programming language, or other algorithms) are provided in the paper.
So I would like to know if there is any implementation of it, or any existing application similar to FLUX-CiM (i.e. unsupervised-learning style, as it is more reliable). I am very new to machine learning, and I would like to know if there are any tutorials or starting points for learning to implement my application.
I'm not too familiar with the various tools, but AnyStyle seems to have the functionality you're looking for, and is machine-learning based.
Repo: https://github.com/inukshuk/anystyle-parser
Demo: http://anystyle.io/
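Before committing to a full learned model, it can help to get a feel for the task with a simple rule-based baseline, then move to a real parser like AnyStyle. Here is a purely illustrative Python sketch; the regex pattern, field names, and the example citation string are all my own assumptions, not from FLUX-CiM or AnyStyle:

```python
import re

def parse_citation(citation):
    """Very naive splitter: authors (before the year), year, title (after).
    Real systems use sequence labelling rather than fixed patterns."""
    fields = {}
    # Look for a four-digit year, either in parentheses or after a comma.
    match = re.search(r"\((\d{4})\)|,\s*(\d{4})", citation)
    if match:
        fields["year"] = match.group(1) or match.group(2)
        fields["authors"] = citation[:match.start()].strip(" ,.")
        fields["title"] = citation[match.end():].strip(" ,.")
    return fields

# A made-up citation string, just to exercise the parser.
example = "Doe, J. (2020). A made-up paper title for illustration."
print(parse_citation(example))
```

A baseline like this breaks on anything unusual (multiple years, missing punctuation, non-Western name orders), which is exactly why tools such as AnyStyle train a sequence-labelling model on annotated references instead.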

Moses - Online Integration

We're actually looking to integrate Moses into our localization workflow. Our application is in Java, and we're looking at using Moses' functionality via XML-RPC calls.
Specifically, we're looking at APIs for:
- Incremental training (i.e. avoiding having to retrain the model from scratch every time we wish to use some new training data)
- Domain-specific training (i.e. it should maintain separate phrase tables for each domain the input data belongs to)
- Decoding
The tutorial says that these can be achieved via XML-RPC calls, but I can't find any examples or clear instructions for doing so. Can someone please provide some examples?
Also, I would like to know whether the training and decoding phases can be done in a distributed manner.
Thanks!
This question is perfectly suitable for the Moses mailing list:
http://www.statmt.org/moses/?n=Moses.MailingLists
Moses server documentation (via XML-RPC):
http://www.statmt.org/moses/?n=Moses.AdvancedFeatures#ntoc28
However, I have had better experiences with moses/contrib/web/bin/daemon.pl, which also runs a server; you communicate with it via a TCP stream.
General examples are harder to find (everyone has a different environment), but make your question more specific and send it to the Moses mailing list. (For example, someone had a problem with server installation: http://comments.gmane.org/gmane.comp.nlp.moses.user/7242 )
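For the decoding part, the XML-RPC call itself is short. Here is a sketch in Python of a client for a running mosesserver, assuming it listens locally on port 8080 and exposes the `translate` method described in the Moses server documentation; the host, port, and parameter struct are assumptions to adapt to your setup (a Java client using Apache XML-RPC would look analogous):

```python
import xmlrpc.client

# Assumed endpoint of a locally running mosesserver; adjust as needed.
MOSES_URL = "http://localhost:8080/RPC2"

def translate(source_text, url=MOSES_URL):
    """Send one sentence to a running mosesserver via XML-RPC."""
    proxy = xmlrpc.client.ServerProxy(url)
    params = {"text": source_text}      # the struct mosesserver expects
    response = proxy.translate(params)  # returns a struct with the output
    return response["text"]             # the translated sentence

# Example call (requires a running mosesserver):
# print(translate("das ist ein haus"))
```

Note that incremental and domain-specific training are not plain server calls like this one; those are configured on the training side, which is another reason to ask on the mailing list with your specific setup.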

Open Alternatives to Google Prediction API

A recent announcement by Google about the Google Prediction API sounded very interesting. It could be useful for a project that is coming up, and would probably do a better job than some custom code I was considering.
However, there is some vendor lock-in: Google retains the trained model and could later choose to overcharge me for it. It occurred to me that there are probably open-source equivalents, if I am willing to host the training myself (I am) and to live without their ability to throw hardware at the problem at a moment's notice.
The last time I looked at third-party machine learning code was many years ago, and there were a lot of details that needed to be carefully considered and customised for your project. Google appears to have hidden those decisions and takes care of them for you. To me, this is still indistinguishable from magic, but I would like to hear whether others can do the same.
So my question is:
What alternatives to the Google Prediction API exist which:
- categorise data with supervised machine learning,
- can be easily configured (or don't need configuration) for different kinds and scales of data sets, and
- are open-source and self-hosted (or, at the very least, provide royalty-free use of your model without a dependence on a third party)?
Maybe Apache Mahout?
PredictionIO is an open source machine learning server for software developers to create predictive features, such as personalization, recommendation and content discovery.
I have been looking recently at tools like the Google Prediction API. One of the first ones I was pointed to was the Weka machine learning toolkit, which could be worth checking out for anyone looking.
I'm not sure if it's relevant, but directededge seems to be doing exactly that :)
There is a good free-to-use service, Yandex Predictor, with a quota of 100,000 requests per day. It works for text only, and supports several languages and spell correction.
You need to get a free API key; then you can use its simple RESTful API. The API supports JSON, XML and JSONP as output.
Unfortunately, I cannot find documentation in English; you can use Google Translate. I can translate the docs if there is some demand.
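What services like the Prediction API (and libraries such as Mahout or Weka) automate is the basic supervised train/predict loop. As a minimal illustration of that loop, here is a self-contained nearest-centroid classifier in pure Python; the toy data and class labels are made up for the example:

```python
# A tiny nearest-centroid classifier: "training" computes the mean
# feature vector per class; prediction picks the closest centroid.

def train(samples):
    """samples: list of (features, label) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest to the features."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Toy data set: two classes in a 2-D feature space.
data = [([1.0, 1.0], "spam"), ([1.2, 0.8], "spam"),
        ([5.0, 5.0], "ham"), ([4.8, 5.2], "ham")]
model = train(data)
print(predict(model, [1.1, 0.9]))  # -> spam
print(predict(model, [5.1, 4.9]))  # -> ham
```

The open-source tools mentioned in the answers above wrap exactly this workflow, but with far better models, feature handling, and evaluation; the value they add is in the decisions this toy version hard-codes.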
