How to create a custom environment using OpenAI Gym for reinforcement learning

I am a newbie in reinforcement learning working on a college project. The project is about optimizing x86 hardware power. I am running proprietary software on a Linux distribution (Ubuntu 16.04). The goal is to use reinforcement learning to optimize the power consumption of the system while keeping the performance degradation of the software to a minimum. The proprietary software is a cellular network application.
As we already know, the primary functional blocks of reinforcement learning are the agent and the environment. The basic idea is to use the cellular network running on the x86 hardware as the RL environment. This environment interacts with the agent through states, actions, and rewards.
From reading different materials, I understand that I need to wrap my software as a custom environment from which I can retrieve the state features. The state features are application-layer KPIs such as latency and throughput. The action space may include instructions to Linux to change the power setting (I can use some predefined set of power options). I have not yet decided on the reward function.
I read this post and decided that I should use OpenAI Gym to create my custom environment.
My doubt is whether using OpenAI Gym to create a custom environment is the right choice for this type of setup. Am I going in the right direction, or is there a better or alternative tool for creating a custom environment? Any tutorial or pointers for creating this custom environment would be appreciated.
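A minimal sketch of what such a custom environment could look like with the classic Gym API. The class name, KPI ranges, reward, and the `_read_kpis`/`_apply_power_level` hooks are illustrative placeholders, not part of any existing library; they would have to wrap whatever interfaces the cellular-network software and Linux actually expose:

```python
import gym
from gym import spaces
import numpy as np

class PowerTuningEnv(gym.Env):
    """Sketch of a custom Gym environment for power optimization."""

    def __init__(self, power_levels):
        super().__init__()
        self.power_levels = power_levels
        # One discrete action per predefined power option.
        self.action_space = spaces.Discrete(len(power_levels))
        # Observation: [latency_ms, throughput_mbps]; the bounds are a guess.
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0]),
            high=np.array([1000.0, 10000.0]),
            dtype=np.float32)

    def _read_kpis(self):
        # Placeholder: query the application-layer KPIs and return
        # np.array([latency_ms, throughput_mbps], dtype=np.float32).
        raise NotImplementedError

    def _apply_power_level(self, level):
        # Placeholder: instruct Linux to switch to the chosen power option.
        raise NotImplementedError

    def reset(self):
        self._apply_power_level(self.power_levels[0])
        return self._read_kpis()

    def step(self, action):
        self._apply_power_level(self.power_levels[action])
        obs = self._read_kpis()
        latency, throughput = obs
        # Example reward only: penalize latency, reward throughput;
        # the real trade-off against power is up to you.
        reward = float(throughput) - 0.1 * float(latency)
        done = False  # continuing task; episodes could also be time-limited
        return obs, reward, done, {}
```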

Related

Multi-agent reinforcement learning with external simulation platform

I have a multi-agent cooperative task to solve, for which I am using an external simulation environment created in Tecnomatix Plant Simulation.
I can communicate with my simulation through a COM interface, and I can read the values of my observations, trigger actions, and get reward values from the simulation.
How can I model my real-time environment as a Gym environment,
so that I can use existing baseline algorithm implementations rather than building everything from scratch?
I currently can't find enough information about this on the internet.
Thanks in advance.
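A common pattern here is a thin Gym wrapper that delegates every reset/step to the external simulator. Below is a rough sketch under the assumption that you already have a proxy object for the COM connection; all simulator method names are placeholders to be replaced with your actual COM calls:

```python
import gym
from gym import spaces
import numpy as np

class PlantSimEnv(gym.Env):
    """Sketch: expose an external simulator (reached e.g. over COM) as a Gym env."""

    def __init__(self, sim_client, n_actions, obs_dim):
        super().__init__()
        self.sim = sim_client  # e.g. a COM proxy object
        self.action_space = spaces.Discrete(n_actions)
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(obs_dim,), dtype=np.float32)

    def reset(self):
        self.sim.reset_simulation()  # placeholder COM call
        return np.asarray(self.sim.get_observation(), dtype=np.float32)

    def step(self, action):
        self.sim.trigger_action(int(action))  # placeholder COM call
        obs = np.asarray(self.sim.get_observation(), dtype=np.float32)
        reward = float(self.sim.get_reward())  # placeholder COM call
        done = bool(self.sim.is_done())        # placeholder COM call
        return obs, reward, done, {}
```

Once such a wrapper exists, any library that accepts a gym.Env (Stable Baselines, RLlib, etc.) can be pointed at it; the multi-agent aspect then depends on whether you flatten the agents into one environment or use a dedicated multi-agent API.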

Other compression methods for Federated Learning

I noticed that the gradient quantization compression method is already implemented in the TFF framework. How about non-traditional compression methods, where we select a sub-model by dropping some parts of the global model? I came across the "Federated Dropout" compression method in the paper "Expanding the Reach of Federated Learning by Reducing Client Resource Requirements" (https://arxiv.org/abs/1812.07210). Any idea whether the Federated Dropout method is already supported in TensorFlow Federated? If not, any insights on how to implement it? (The main idea of the method is to drop a fixed percentage of the activations and filters in the global model so as to exchange and train a smaller sub-model.)
Currently, there is no implementation of this idea available in the TFF code base.
But here is an outline of how you could do it; I recommend starting from examples/simple_fedavg:
1. Modify the top-level build_federated_averaging_process to accept two model_fns -- one server_model_fn for the global model and one client_model_fn for the smaller sub-model structure actually trained on clients.
2. Modify build_server_broadcast_message to extract only the relevant sub-model from server_state.model_weights. This is the mapping from the server model to the client model.
3. The client_update may actually not need to be changed (I am not 100% sure), as long as only the client_model_fn is provided via client_update_fn.
4. Modify server_update -- the weights_delta will be the update to the client sub-model, so you will need to map it back to the larger global model.
In general, steps 2 and 4 are tricky, as they depend not only on which layers are in a model but also on how they are connected. So it will be hard to create an easy-to-use general solution, but it should be fine to write these steps for a specific model structure you know in advance.
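To make steps 2 and 4 concrete, here is a rough NumPy sketch of the weight mapping for a single dense layer. This is illustrative only, not TFF code; the function names are made up, and a real implementation also has to propagate the kept indices into the input dimension of the next layer:

```python
import numpy as np

def extract_dense_submodel(kernel, bias, keep_fraction, rng):
    """Server -> client mapping (step 2) for one dense layer: keep a random
    subset of output units and slice the kernel/bias accordingly. The kept
    indices are returned so the client update can later be mapped back."""
    n_units = kernel.shape[1]
    n_keep = max(1, int(round(keep_fraction * n_units)))
    kept = np.sort(rng.choice(n_units, size=n_keep, replace=False))
    return kernel[:, kept], bias[kept], kept

def merge_dense_update(kernel, bias, kernel_delta, bias_delta, kept):
    """Client -> server mapping (step 4): scatter the sub-model update back
    into the full-size weights, leaving the dropped units untouched."""
    new_kernel, new_bias = kernel.copy(), bias.copy()
    new_kernel[:, kept] += kernel_delta
    new_bias[kept] += bias_delta
    return new_kernel, new_bias
```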
We have several compression schemes implemented in our simulator:
"FL_PyTorch: Optimization Research Simulator for Federated Learning."
https://burlachenkok.github.io/FL_PyTorch-Available-As-Open-Source/
https://github.com/burlachenkok/flpytorch
FL_PyTorch is a suite of open-source software written in Python that builds on top of PyTorch, one of the most popular research Deep Learning (DL) frameworks. We built FL_PyTorch as a research simulator for FL to enable fast development, prototyping, and experimenting with new and existing FL optimization algorithms. Our system supports abstractions that provide researchers with sufficient flexibility to experiment with existing and novel approaches to advance the state of the art. The work is in the proceedings of the 2nd International Workshop on Distributed Machine Learning, DistributedML 2021. The paper, presentation, and appendix are available in the DistributedML'21 proceedings (https://dl.acm.org/doi/abs/10.1145/3488659.3493775).

Is there a way to use external, compiled packages for data processing in Google's AI Platform?

I would like to set up a prediction task, but the data preprocessing step requires using tools outside of Python's data science ecosystem, though Python has APIs to work with those tools (e.g. a compiled Java NLP toolset). I first thought about creating a Docker container to have an environment with those tools available, but a commenter has said that this is not currently supported. Is there perhaps some other way to make such tools available to the Python prediction class needed for AI Platform? I don't really have a clear sense of what happens on the backend with AI Platform, and how much ability a user has to modify or set that up.
Not possible today. Is there a specific use case you are targeting that is not satisfied today?
Cloud AI Platform offers multiple prediction frameworks (TensorFlow, scikit-learn, XGBoost, PyTorch, custom prediction routines) in multiple versions.
After looking into the requirements, you can use the new AI Platform feature, custom prediction routines: https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
To deploy a custom prediction routine to serve predictions from your trained model, do the following:
Create a custom predictor to handle requests
Package your predictor and your preprocessing module. Here you can install your custom libraries.
Upload your model artifacts and your custom code to Cloud Storage
Deploy your custom prediction routine to AI Platform
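A bare-bones predictor following the interface described in the custom prediction routine documentation linked above; the artifact file names and the preprocessing call are placeholders for your own code and for the external tool's Python API:

```python
import os
import pickle

class MyPredictor(object):
    """Sketch of an AI Platform custom prediction routine."""

    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor

    def predict(self, instances, **kwargs):
        # Run the custom (possibly non-Python-native) preprocessing, then the model.
        inputs = [self._preprocessor.preprocess(x) for x in instances]
        outputs = self._model.predict(inputs)
        return outputs.tolist()

    @classmethod
    def from_path(cls, model_dir):
        # Called by AI Platform when the model version is created; loads the
        # artifacts that were uploaded to Cloud Storage alongside this code.
        with open(os.path.join(model_dir, 'model.pkl'), 'rb') as f:
            model = pickle.load(f)
        with open(os.path.join(model_dir, 'preprocessor.pkl'), 'rb') as f:
            preprocessor = pickle.load(f)
        return cls(model, preprocessor)
```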

A light and accurate classifier which is doable on a device with limited sources

I have a project in which I need to classify the data coming from several sensors (time-series data), such as a gyroscope, into several classes. I have used several classifiers, including SVM, decision trees, neural networks, and KNN, in a batch scenario. My ultimate goal is to find a real-time classifier that is accurate, light, and able to improve itself, so that I can implement it on my device, which has limited resources (CPU, RAM, ...). I was thinking of a semi-supervised classifier, since I can store a few labeled data points on my device and use future data points to improve my classifier. Does anyone have any recommendations or experience in this regard?
Online learning is very challenging. I recommend you steer away from it for now and use batch learning. You can always update the model when you update the mobile app, or just make the app look for a new, updated model on your server every x days.
Now, how do you run a machine learning algorithm efficiently on a phone with limited resources? First, you have to identify which platform you are using. I assume you want a platform-agnostic answer. Most ML algorithms (except lazy-learning ones) can run efficiently on a smartphone; have a look at this benchmarking experiment.
You have several options here:
iOS: Here's a list of all machine learning libraries available publicly.
Android: Weka for Android, this lib has a huge number of ML algorithms.
Platform-agnostic deep learning: TensorFlow, which lets you export your models to TensorFlow Lite (tutorial) and deploy them on any mobile OS, and Caffe2, which lets you train deep learning models and export them to any smartphone OS.
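For the TensorFlow Lite route mentioned above, the conversion step itself is short. A sketch, assuming a trained Keras model saved to disk (the file names are illustrative):

```python
import tensorflow as tf

# Convert a trained Keras model into a TensorFlow Lite flatbuffer for on-device use.
model = tf.keras.models.load_model('my_classifier.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

with open('my_classifier.tflite', 'wb') as f:
    f.write(tflite_model)
```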

How to get a specific machine type for ML Engine online prediction?

Is there an option to request a faster node for online prediction in ML Engine?
For example, when training I can configure any of these machines for my job:
standard,
large_model,
complex_model_s,
complex_model_m,
complex_model_l,
standard_gpu,
complex_model_m_gpu,
complex_model_l_gpu,
standard_p100,
complex_model_m_p100
See description of available clusters and machines for training here and here
I am struggling to find out whether it is possible to control what kind of machine runs my online prediction.
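For comparison, at training time the machine type is chosen explicitly in the job request. A rough sketch using the Python API client (the project, bucket, and job names are placeholders; at the time of this question there was no equivalent field for online prediction):

```python
from googleapiclient import discovery

# Submit a training job with an explicit machine type (training only).
ml = discovery.build('ml', 'v1')

job_body = {
    'jobId': 'my_training_job_001',  # illustrative
    'trainingInput': {
        'scaleTier': 'CUSTOM',
        'masterType': 'complex_model_m_gpu',  # one of the machine types listed above
        'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
        'pythonModule': 'trainer.task',
        'region': 'us-central1',
    },
}

request = ml.projects().jobs().create(parent='projects/my-project', body=job_body)
response = request.execute()
```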
We are currently adding that capability and will let you know when it's publicly available.
ML Engine offers a 4-core instance type in addition to the default serving instance type for online prediction. However, the feature is still in alpha, and it is only available to a selected list of accounts that have opted in as "Trusted Testers". Please contact cloudml-feedback@google.com if you need help setting up a prediction service with a faster node.
