I have Python machine learning code and a Flutter mobile application. Is there a way to connect the two? Also, is there a Flutter library that can apply machine learning / neural network concepts to text?
Moreover, what are the best practices/tools/platforms for developing a mobile application based on machine learning?
There is currently no way to run Python code within a Flutter app, so you'll probably need to interface the two with an API. However, this requires a larger codebase and you'll have to pay for server bandwidth, so it's much easier to just build your ML functionality within Flutter.
If you insist on going with Python for your ML:
You'll need to build a RESTful API.
Here are some resources for you to get started on that path.
(1) https://www.codementor.io/sagaragarwal94/building-a-basic-restful-api-in-python-58k02xsiq
(2) https://realpython.com/flask-connexion-rest-api/
There are a lot of different frameworks you can do this with, see (2).
Once you get that up and running here's a tutorial for importing that data into your Flutter app:
(3) https://www.tutorialspoint.com/python_data_science/python_processing_json_data.htm
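To make the API approach concrete, here is a minimal sketch of a JSON prediction endpoint using only the Python standard library. The `/predict` route, the `predict_sentiment` stub, and the port are all made-up placeholders; a real service would use Flask as in (1)/(2) and call an actual trained model:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict_sentiment(text):
    # Placeholder for a real model call, e.g. model.predict(vectorize(text)).
    return {"label": "positive" if "good" in text.lower() else "negative"}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body posted by the Flutter client.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = predict_sentiment(payload.get("text", ""))
        body = json.dumps(result).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    # The Flutter app would POST {"text": "..."} to http://<host>:8000/
    # (e.g. with the Dart http package) and decode the JSON reply.
    HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()
```

The Flutter side then just makes an HTTP POST and decodes the JSON, as covered in (3).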
If you want to build your ML inside of Flutter:
This depends on your use case, but consider checking out (4) and using ML Kit for Firebase.
(4) http://flutterdevs.com/blog/machine-learning-in-flutter/
If you want to get a little more into the weeds or you have a more specific use case, see (5).
(5) https://flutterawesome.com/a-machine-learning-app-for-an-age-old-debate/
Good luck!
I'm very new to Ray RLlib and have an issue with using a custom simulator my team made.
We're trying to integrate a custom Python-based simulator into Ray RLlib to do a single-agent DQN training. However, I'm uncertain about how to integrate the simulator into RLlib as an environment.
According to the image below from the Ray documentation, it seems like I have two different options:
Standard environment: according to the Carla simulator example, it seems like I can simply wrap my custom simulator with the gym.Env class API and register it as an environment using the ray.tune.registry.register_env function.
External environment: however, the image below and the RLlib documentation confuse me further, since they suggest that external simulators that can run independently, outside the control of RLlib, should be used via the ExternalEnv class.
If anyone can suggest what I should do, it will be very much appreciated! Thanks!
If your environment can indeed be structured to fit the Gym style (init, reset, and step functions), you can use the first option.
The external environment is mostly for RL environments that don't fit this style, for example a web-browser-based application (test automation, etc.) or a continuously running finance app.
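Wrapping a custom simulator in the Gym style mostly comes down to implementing reset() and step(). A minimal sketch (FakeSimulator and its method names are made-up stand-ins for your simulator; in real use the wrapper would subclass gym.Env, define observation_space/action_space, and be registered via ray.tune.registry.register_env):

```python
class FakeSimulator:
    # Hypothetical stand-in for your custom Python-based simulator.
    def restart(self):
        self.t = 0
    def advance(self, action):
        self.t += 1
        # Returns observation, reward, and whether the episode is over.
        return [float(self.t)], -1.0, self.t >= 5

class SimEnv:
    """Gym-style wrapper; in real code this would be `class SimEnv(gym.Env)`."""
    def __init__(self, env_config=None):
        self.sim = FakeSimulator()
    def reset(self):
        self.sim.restart()
        return [0.0]  # initial observation
    def step(self, action):
        obs, reward, done = self.sim.advance(action)
        return obs, reward, done, {}  # obs, reward, done, info dict

# Registration sketch (requires Ray):
# from ray.tune.registry import register_env
# register_env("my_sim", lambda cfg: SimEnv(cfg))
```

Once registered, you can pass "my_sim" as the env name in your DQN trainer config.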
Since you wrote that you work with a custom Python-based simulator, I would say you can employ the PolicyClient and PolicyServerInput API. Implement the PolicyClient on your simulator (env) side and provide it with data from the simulator (observations, rewards, etc.). This is what I think may help you.
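For completeness, the client-side loop looks roughly like this. This is a sketch assuming Ray RLlib is installed, a PolicyServerInput is serving on the trainer side at the address shown, and your simulator exposes Gym-style reset/step; exact signatures may differ between Ray versions:

```python
def run_client_episode(sim, address="http://localhost:9900"):
    # Requires Ray RLlib; a PolicyServerInput must be listening at `address`.
    from ray.rllib.env.policy_client import PolicyClient

    client = PolicyClient(address, inference_mode="remote")
    episode_id = client.start_episode(training_enabled=True)
    obs = sim.reset()
    done = False
    while not done:
        # Ask the server-side policy for an action, step the simulator,
        # and report the reward back so the trainer can learn from it.
        action = client.get_action(episode_id, obs)
        obs, reward, done, _ = sim.step(action)
        client.log_returns(episode_id, reward)
    client.end_episode(episode_id, obs)
```

The trainer process runs independently and treats these logged transitions as its training data.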
What is the basic approach for mobile app development and how to choose technologies specifically for iOS?
What I mean is, what should the first step be? For example:
1. UI development
2. UI testing
3. Backend
and so on?
I would say learn Swift and Xcode for iOS development. That learning covers both front-end and back-end aspects, including testing and debugging. It also covers connecting to third-party platforms (remote databases, etc.).
I'm developing a project where I'm trying to create a distributed version of TensorFlow (the current open-source version is single-node), where the cluster is composed entirely of mobile devices (e.g. smartphones).
In your opinion, what is a possible application or use case where this could be useful? Can you give me some example please?
I know that this is not a "standard" Stack Overflow question, but I didn't know where else to post it (if you know a better place, please let me know). Thanks so much for your help!
http://www.google.com.hk/search?q=teonsoflow+android
TensorFlow can be used for image identification, and there is an example using the camera on Android.
There could be many distributed uses for this: face recognition, 3D space construction from 2D images.
TensorFlow can be used for a chat bot. I am working towards using it for a personal assistant. The AI on one phone could communicate with the AI on other phones.
It could use vision and GPS to 'reserve' a lane for you on the road. Intelligently crowd-planned roads and intersections would be safer.
I am also interested in using it for distributed mobile. Please contact me with my user name at gmail or Skype.
https://boinc.berkeley.edu
I think all my suggestions could run on individual phones with communication between them. If you want them to act like a cluster, as @Yaroslav pointed out, there is SETI@home and other projects running in the BOINC client.
TensorFlow could be combined with a game engine. You could have a procedurally generated, AI-driven augmented reality game that generates the story as multiple players interact with it. I have seen research papers for each of these components.
I have been working on Augmented Reality for quite a few months. I have used third-party tools like Unity/Vuforia to create augmented reality applications for Android.
I would like to create my own framework with which to build my own AR apps. Can someone point me to the right tutorials/links to achieve my goal? At a higher level, my plan is to create an application that can recognize multiple markers and match them against cloud-stored models.
That seems like a massive undertaking: model recognition is not an easy task. I recommend looking at OpenCV (which has some standard algorithms you can use as a starting point) and then looking at a good computer vision book (e.g., Richard Szeliski's book or Hartley and Zisserman).
But you are going to run into a host of practical problems. Consider that systems like Vuforia provide camera calibration data for most Android devices, and it's hard to do computer vision without it. Then, of course, there's efficiently managing the whole pipeline, which (again) companies like Qualcomm and Metaio invest huge amounts of money in.
I'm working on a project that does framemarker tracking and I've started exporting bits of it out to a project I'm calling OpenAR. Right now I'm in the process of pulling out unpublishable pieces and making Vuforia and the OpenCV versions of marker tracking interchangeable. You're certainly welcome to check out the work as it progresses. You can see videos of some of the early work on my YouTube channel.
The hard work is improving performance to be as good as Vuforia.
I'm new to embedded software. I want to build an image processing application for my AT91SAM9261-EK development board by Atmel. To keep it simple, I want to use OpenCV functions, but I'm not sure how I am going to generate a .bin file for flashing onto the board.
Also, can anyone help me understand the flow / software structure for these kinds of applications?
For example, will I need Linux or some other OS? If so, where does the actual image processing code, which I intend to write using OpenCV, sit?
Until now, for simple code like a basic LCD project for this board, I have been compiling with IAR Embedded Workbench. If I want to use the same setup for OpenCV functions, is there a way?
Are there any other open-source image processing libraries, similar to OpenCV, that are easy to integrate with IAR or any other ARM compiler?
Also, it would be really useful if there are any links to some learning documents regarding this.
Thanks in advance!
Depending on your application, I think that CPU is not going to be powerful enough to do any kind of image processing; plus, the awkwardness of working with an unfamiliar system is not going to make your life any easier.
If using this exact CPU is not super important, I'd recommend a BeagleBoard or PandaBoard, mainly because Ubuntu has installers targeted at those boards and Ubuntu/Debian offers OpenCV packages out of the box. This removes a whole lot of hurdles if you're new to embedded development -- basically it turns your dev board into a full-featured computer: just plug in a monitor, keyboard, and mouse.
The Raspberry Pi looks promising in this regard as well, and you certainly can't argue with the price! (You may be able to install Debian on your board and get access to OpenCV packages that way, but I can't vouch for the ease of use of this method compared to Ubuntu, and embedded work is difficult enough, especially if you're new to Linux.)
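On an Ubuntu/Debian board, getting OpenCV is typically just a package-manager command (exact package names vary slightly between releases; these are the common ones):

```shell
# Install the OpenCV C++ development files and Python bindings.
sudo apt-get update
sudo apt-get install libopencv-dev python-opencv
```

After that, your image processing code is just an ordinary Linux program linked against OpenCV, with no custom .bin flashing step needed.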