Limited number of clients used in federated learning - tensorflow-federated

I just started studying federated learning and want to apply it to a certain dataset, and some questions have come up.
My data contains records of 3 categories, each of which has 3 departments. I am planning to build a separate federated learning model for each category and treat the three departments of that category as the distributed clients.
Is this possible, or does building federated learning models require having thousands of clients?
Thanks

It is difficult to say from what you have provided in your question. Usually, when building a federated learning system, you are extending a centralized approach to one where the data is split/partitioned between segregated clients. The type of data you have, the task you are trying to solve, and the amount of data required to solve that task in a centralized setting, along with other factors, will determine how many clients you can use and how much data is required at each client. Additionally, the aggregation method you use to combine the parameters from different clients will affect this. I suggest experimenting with different client numbers and partitioning methods and seeing what suits your needs.
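To make the "small number of clients" setup concrete, here is a minimal sketch of partitioning one category's records into three department clients, as the question describes. The record fields (`department`, `value`) and the helper function are hypothetical, purely for illustration; a real setup would build one `tf.data.Dataset` per client from such a partition.

```python
from collections import defaultdict

# Hypothetical records for one category; each record carries a
# "department" field identifying which of the 3 clients it belongs to.
records = [
    {"department": "A", "value": 1.0},
    {"department": "B", "value": 2.0},
    {"department": "A", "value": 3.0},
    {"department": "C", "value": 4.0},
]

def partition_by_department(records):
    """Split records into one client dataset per department."""
    clients = defaultdict(list)
    for r in records:
        clients[r["department"]].append(r)
    return dict(clients)

client_datasets = partition_by_department(records)
# Three clients here, one per department; FL simulations with a
# handful of clients like this are perfectly valid.
```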

Do unsupervised machine learning model features need to be independent?

I'm training an unsupervised machine learning model and want to make sure my features are as useful as possible.
Do unsupervised machine learning model features need to be independent? For example, I have a feature (subscriptionId) that is the subscription ID of different cloud accounts within a tenant. I also have a feature that is the resourceId of a resource within the subscription.
However, this resourceId contains the subscriptionId. Is it best practice to combine these features, or to remove one feature (e.g. subscriptionId) to avoid dependence and duplication among dataset features?
For unsupervised learning, commonly used for clustering, association, or dimensionality reduction, features don't need to be fully independent. However, if you have many unique values, your models are likely to learn to differentiate on these high-entropy values instead of learning the interesting or significant structure you might hope for.
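As one way to handle the redundancy described in the question, here is a hedged sketch (hypothetical field values and helper name) of keeping a single ID feature by stripping the embedded subscriptionId prefix out of resourceId, rather than feeding both overlapping high-entropy strings to the model:

```python
# Hypothetical rows: resourceId embeds subscriptionId, so the two
# features duplicate the same high-entropy information.
rows = [
    {"subscriptionId": "sub-001", "resourceId": "sub-001/vm/web-1"},
    {"subscriptionId": "sub-002", "resourceId": "sub-002/db/main"},
]

def deduplicate_features(rows):
    """Keep subscriptionId as one feature and reduce resourceId to the
    resource path with the embedded subscription prefix removed."""
    out = []
    for r in rows:
        sub = r["subscriptionId"]
        path = r["resourceId"]
        if path.startswith(sub + "/"):
            path = path[len(sub) + 1:]
        out.append({"subscriptionId": sub, "resourcePath": path})
    return out

clean = deduplicate_features(rows)
```

This removes the literal duplication while preserving both pieces of information; whether to keep either ID at all depends on the clustering goal, per the caution above about high-entropy features.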
If you're working on generative unsupervised models, I cannot overstate how much risk this may create for security and secret disclosure for Oracle Cloud Infrastructure (OCI) customers. Generative models are premised on regurgitating their inputs, and thousands of papers have been written on extracting private information back out of trained models.
It's not clear what problem you're working on, and the question seems early in its formulation.
I recommend you spend time delving into the limits of statistics and data science, which are the foundation of modern popular machine learning methods.
Once you have an idea of what questions can be answered well by ML, and what can't, then you might consider something like fastAI's course.
https://towardsdatascience.com/the-actual-difference-between-statistics-and-machine-learning-64b49f07ea3
https://www.nature.com/articles/nmeth.4642
Again, depending on how the outputs will be used and on who can view or (even indirectly) query the model, it seems unwise to train on private values, especially if you want to generate outputs. ML methods are only useful if you have access to a lot of data, and if you have access to the data of many users, you need to be a good steward of Oracle Cloud customer data.

In FL, can clients train different model architectures?

I am practicing with this tutorial, and I would like each client to train a different architecture and a different model. Is this possible?
TFF does support different clients having different model architectures.
However, the Federated Learning for Image Classification tutorial uses tff.learning.build_federated_averaging_process, which implements the Federated Averaging (McMahan et al., 2017) algorithm, defined such that each client receives the same architecture. This is accomplished in TFF by "mapping" (in the functional programming sense) the model to each client dataset to produce a new model, and then aggregating the results.
To achieve different clients having different architectures, a different federated learning algorithm would need to be implemented. There are a couple of (non-exhaustive) ways this could be expressed:
Implement an alternative to ClientFedAvg. This method applies a fixed model to the client's dataset. An alternative implementation could potentially create a different architecture per client.
Create a replacement for tff.learning.build_federated_averaging_process
that uses a different function signature, splitting out groups of clients
that would receive different architectures. For example, currently FedAvg
looks like:
(<state@SERVER, data@CLIENTS> -> <state@SERVER, metrics@SERVER>)
This could be replaced with a method with the signature:
(<state@SERVER, data1@CLIENTS, data2@CLIENTS, ...> -> <state@SERVER, metrics@SERVER>)
This would allow the function to internally tff.federated_map() different model architectures to different client datasets. This would likely only be useful in FL simulations or for experimentation and research.
However, in federated learning there will be difficult questions around how to aggregate the models back on the server into a single global model. This probably needs to be designed first.
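The per-group structure described above can be sketched without any TFF code at all: a minimal NumPy illustration (all function names hypothetical) of why clients must be grouped by architecture before averaging, since weight arrays of different shapes cannot be averaged together.

```python
import numpy as np

def fed_avg(client_weights):
    """Unweighted federated averaging of same-shaped weight arrays."""
    return np.mean(client_weights, axis=0)

def grouped_round(groups):
    """groups: dict mapping architecture name -> list of client weight
    arrays for that architecture. Runs one averaging round per group,
    producing one aggregated model per architecture (not a single
    global model -- that harder aggregation question remains open)."""
    return {arch: fed_avg(ws) for arch, ws in groups.items()}

# Two architecture groups with incompatible weight shapes.
groups = {
    "small_net": [np.array([1.0, 2.0]), np.array([3.0, 4.0])],
    "big_net":   [np.array([1.0, 1.0, 1.0]), np.array([3.0, 3.0, 3.0])],
}
global_models = grouped_round(groups)
```

This mirrors the split-signature idea (data1@CLIENTS, data2@CLIENTS, ...): each group's round is independent, and merging the per-group results into one model would require something beyond plain weight averaging, such as distillation.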

transfer knowledge learned from distributed source domains

To resolve the problem of non-IID data in federated learning, I read a paper which adds a new node with a different data domain and transfers knowledge from the decentralized nodes. My question is: what information is transferred, the model updates or the data?
In layman's terms, non-IID means that class labels are not distributed evenly between clients for training. For obvious reasons, in a federated environment it is not feasible for every client to hold and train on IID data. Regarding your specific query about how this works in the paper mentioned in your question, please share a link to the paper.
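To make the non-IID notion concrete, here is a small sketch (hypothetical helper, NumPy only) of label-skew partitioning, a common way to simulate non-IID clients: each client is assigned data from only a subset of the label space.

```python
import numpy as np

def label_skew_partition(labels, num_clients, labels_per_client, seed=0):
    """Assign each client the indices of examples whose labels fall in
    a randomly chosen subset of the label space (label skew)."""
    rng = np.random.default_rng(seed)
    all_labels = np.unique(labels)
    clients = {}
    for c in range(num_clients):
        chosen = rng.choice(all_labels, size=labels_per_client,
                            replace=False)
        clients[c] = np.flatnonzero(np.isin(labels, chosen))
    return clients

labels = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
# Each of 3 clients sees examples from at most 2 of the 3 labels.
clients = label_skew_partition(labels, num_clients=3, labels_per_client=2)
```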

How would one implement class weighting for individual federated learning clients?

I am attempting to utilise TensorFlow Federated for an image classification task with 7 classes and 3-5 clients. Each client has a different class distribution of labels. I have successfully implemented this tutorial for my use-case and am now looking for improvements. I have a few questions:
Can individual clients have different class weights in their loss function based on the class distribution that is unique to that client?
If so, how would one implement this?
If not, is it because the federated averaging process requires that the clients and the global model share the same loss function?
If I understand your question correctly, then yes: individual clients can have different class weights; in this case we are talking about non-IID data. Suppose we have 7 labels and each client has data from only 1 or 2 of them.
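One way this could be implemented, sketched here with NumPy under stated assumptions (the helper name and the inverse-frequency scheme are my own, not from the tutorial): each client computes weights from its local label distribution and uses them in its own weighted loss. In Keras-based client training, such weights could be passed as the `class_weight` argument to `Model.fit`, or used as per-example sample weights.

```python
import numpy as np

def class_weights(labels, num_classes):
    """Inverse-frequency class weights computed from one client's own
    label distribution; rarer classes get larger weights."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for absent classes
    return labels.size / (num_classes * counts)

# A client with heavy imbalance: class 0 dominates, classes 1 and 2
# are rare, classes 3-6 are absent on this client.
client_labels = np.array([0, 0, 0, 0, 0, 0, 1, 2])
w = class_weights(client_labels, num_classes=7)
```

Because the weights only enter each client's local loss, the federated averaging step itself is unchanged; the server still averages model parameters, so clients do not need identical loss weightings.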

Spam Filtering used by service providers(User customization)

I am learning about the spam filtering techniques implemented by various email service providers. Precisely, this is treated as a classification problem, and various techniques such as Bayesian classifiers, SVMs (Support Vector Machines), KNN, etc. are used to create a classification model.
I understood everything up to these methodologies. But I got a little confused when I saw the user customization for spam filtering in Gmail (we can mark any mail as spam or not spam). How exactly do they implement this option? Do they create a separate classification model for each user, or is there some other option/technique to do this?
I have tried to search the web but didn't get satisfactory results.
Different people have different preferences, so we do indeed need a separate classification model per user. For the sake of efficiency, we divide users into several groups, each of which has its own model.
The most challenging thing is data collection. The data is often incomplete, error-prone, and not accessible.
