I'm very new to Ray RLlib and have an issue with using a custom simulator my team made.
We're trying to integrate a custom Python-based simulator into Ray RLlib to do single-agent DQN training. However, I'm uncertain about how to integrate the simulator into RLlib as an environment.
According to the image below from Ray documentation, it seems like I have two different options:
Standard environment: according to the Carla simulator example, it seems like I can simply use the gym.Env class API to wrap my custom simulator and register it as an environment using the ray.tune.registry.register_env function.
External environment: however, the same image and the RLlib documentation confused me further, since they suggest that external simulators that run independently, outside the control of RLlib, should be used via the ExternalEnv class.
If anyone can suggest what I should do, it will be very much appreciated! Thanks!
If your environment can indeed be structured to fit the Gym style (__init__, reset, and step functions), you can use the first option.
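For illustration, here is a minimal sketch of what that wrapping could look like. `CustomSimulator` and its `reset`/`step` return signatures are stand-ins for whatever your team's simulator actually exposes, and the observation/action spaces are placeholders:

```python
import gym
import numpy as np
from gym import spaces
from ray.tune.registry import register_env


class MySimEnv(gym.Env):
    """Gym-style wrapper around a hypothetical custom simulator."""

    def __init__(self, env_config):
        from my_sim import CustomSimulator  # hypothetical import
        self.sim = CustomSimulator(**env_config)
        # Placeholder spaces; replace with your simulator's real ones.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self):
        return np.asarray(self.sim.reset(), dtype=np.float32)

    def step(self, action):
        obs, reward, done = self.sim.step(action)  # assumed signature
        return np.asarray(obs, dtype=np.float32), reward, done, {}


# Make the env available to RLlib under a string name.
register_env("my_sim", lambda env_config: MySimEnv(env_config))
```

After registering, you can point the DQN trainer at `env="my_sim"` (exact trainer class names vary between Ray versions).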
An external environment is mostly for RL environments that don't fit this style, for example a web-browser-based application (test automation, etc.) or a continuously running finance app.
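If you did go the ExternalEnv route, the control flow inverts: your simulator drives the loop and asks RLlib for actions. A rough sketch, again with a hypothetical `CustomSimulator` (you would construct the class with your real action/observation spaces and register it with `register_env` just like a Gym env):

```python
from ray.rllib.env.external_env import ExternalEnv


class MySimExternalEnv(ExternalEnv):
    """The environment runs its own loop and queries RLlib for actions."""

    def __init__(self, action_space, observation_space):
        super().__init__(action_space, observation_space)

    def run(self):
        from my_sim import CustomSimulator  # hypothetical import
        sim = CustomSimulator()
        while True:
            episode_id = self.start_episode()
            obs = sim.reset()
            done = False
            while not done:
                action = self.get_action(episode_id, obs)
                obs, reward, done = sim.step(action)  # assumed signature
                self.log_returns(episode_id, reward)
            self.end_episode(episode_id, obs)
```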
Since you wrote that you work with a custom Python-based simulator, I would say you can employ the PolicyClient and PolicyServerInput APIs. Implement the PolicyClient on your simulator (env) side and feed it data from the simulator (observations, rewards, etc.). This is what I think may help you.
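Roughly, the client side could look like the sketch below (modeled on RLlib's cartpole client/server example; `CustomSimulator` is again a stand-in and the address/port are arbitrary):

```python
# Runs in the same process as your simulator.
from ray.rllib.env.policy_client import PolicyClient

from my_sim import CustomSimulator  # hypothetical import

client = PolicyClient("http://localhost:9900", inference_mode="remote")
sim = CustomSimulator()

obs = sim.reset()
episode_id = client.start_episode(training_enabled=True)
done = False
while not done:
    # Ask the training server for an action, apply it to the simulator,
    # and report the resulting reward back.
    action = client.get_action(episode_id, obs)
    obs, reward, done = sim.step(action)  # assumed signature
    client.log_returns(episode_id, reward)
client.end_episode(episode_id, obs)
```

On the RLlib side, the trainer would read its experiences from a `PolicyServerInput` (set via the `input` config key) listening on the same host and port, as in the cartpole server example that ships with RLlib.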
I've been trying to work on a proof of concept (POC) where I can embed a UE4 project into an existing application (in my case NativeScript), but this could just as easily apply to Kotlin or React Native.
In the proof of concept I've been able to run the projects on my iPhone, launching from UE4, pretty easily by following the Blueprint and C++ tutorials for the FPS. However, the next stage of my POC requires that I embed the FPS into an existing NativeScript application; this application will manage the root menu, chat, and store aspects of the platform in the POC.
The struggle I'm running into is that I cannot find how to interact with the Xcode project generated from the Blueprint tutorial, and the C++ tutorial generates an Xcode project where I'm unsure which part is the actual root I need to wrap.
Has anyone seen a project doing this before, and if so, are there any blogs or guidance you can point me to? I've been Googling and looking around for a couple of weeks and have hit a dead end. I found a feedback post here from April 2020 referring to a January 2020 post about how Unity has a way to embed into other applications, and additionally a question from 2014 here. But other than that it's a dead end.
A slightly different approach
Disclaimer: I'm not a UE4 developer. Guilty as charged for seeing an unanswered bounty too big to ignore. So I started thinking and looking, and I found something that could be bent to your needs. Enter Pixel Streaming.
Pixel Streaming is a beta feature primarily designed to allow embedding the game into a browser. It opens two-way communication between a server, where the GPU-heavy computation happens, and a browser, where the player interacts with the content; mouse clicks and other events are sent back to the server. Apparently it allows some additional neat things, but those are not relevant to the question at hand.
Since you want to embed the Unreal application into your NativeScript tool (a menu of some kind, if I understood correctly), you could build your application from two separate parts:
One part would run the server.
The second part would handle the overlay via Pixel Streaming.
This reduces the issue of embedding UE4 into an application to the (possibly easier) issue of embedding a browser into your application. (Or, if your application is browser-based: voila, problem solved.)
If you don't want to handle the remote communication, just have the server side run on localhost. (With the nice side effect of saving bandwidth.)
Alternatively, if you are feeling adventurous, you could write your own WebRTC support on the application side to bypass the need for the browser altogether. It might not be worth the effort, though.
Side note: the first of the links you provided is a feature request, which hints at the unfortunate fact that UE4 doesn't support embedding. This is further reinforced by the fact that one of the people there says something along the lines of "Unity can do this; it would be nice if UE4 could as well."
Yet a different approach:
You could use a virtual display to insert the UE4 part into your controller. You would basically be tricking UE4 into thinking that the desired display device is a canvas inside your application.
This thread suggests a similar approach:
In general, the way to connect two libraries like this would be through a platform dependent window handle, e.g. a HWND under Windows. Check the UE api if you find any way to bind the render target to a HWND. Then you could create a wxWindow in wxWidgets and tell UE to render into that window. That would be a first step.
I'm not sure if anything I've listed will be of much help but hey, at least I tried :-). Good luck with your game.
At the same time, the author suggests that you could:
Reverse the problem:
Use the UE4 Slate framework and the Online Subsystem. You would use the former to create the menus you need directly in UE4, and the latter to link to the logic you want to keep outside of UE4. However, that is not what you asked for, so I'm listing it only for completeness' sake.
I have Python machine learning code and a Flutter mobile application. Is there a way to connect the two? Also, is there a library in Flutter that can apply machine learning / neural network concepts to text?
Moreover, what are the best practices/tools/platforms for developing a mobile application based on machine learning?
There is currently no way to run Python code within a Flutter app, so you'll probably need to interface the two with an API. However, this will require a larger codebase and you'll have to pay for server bandwidth, so it's much easier to just build out your ML functionality within Flutter.
If you insist on going with Python for your ML:
You'll need to build a RESTful API.
Here are some resources for you to get started on that path.
(1) https://www.codementor.io/sagaragarwal94/building-a-basic-restful-api-in-python-58k02xsiq
(2) https://realpython.com/flask-connexion-rest-api/
There are a lot of different frameworks you can do this with, see (2).
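As a rough illustration (not tied to any particular tutorial above), a minimal Flask endpoint wrapping a pickled model could look like this; the file name `model.pkl` and the `.predict()` interface are assumptions about your setup:

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumes you saved a scikit-learn-style model with pickle;
# the file name and the .predict() interface are placeholders.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    prediction = model.predict([payload["text"]])[0]
    return jsonify({"prediction": str(prediction)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Your Flutter app would then just POST JSON to `/predict` (e.g. with the `http` package) and decode the JSON response.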
Once you get that up and running, here's a tutorial for importing that data into your Flutter app:
(3) https://www.tutorialspoint.com/python_data_science/python_processing_json_data.htm
If you want to build your ML inside of Flutter
This depends on your use case, but consider checking out (4) and using ML Kit for Firebase.
(4) http://flutterdevs.com/blog/machine-learning-in-flutter/
If you want to get a little more into the weeds, or you have a more specific use case, see (5).
(5) https://flutterawesome.com/a-machine-learning-app-for-an-age-old-debate/
Good luck!
Let's assume I want to develop an isometric 2D mobile game such as Clash of Clans.
My main target would be iOS but of course Android would be nice, too (but not a must-have).
Now I have to decide either to program with Apple's Xcode (and therefore Swift as a language, which I am already pretty familiar with), or to develop my game with Unity3D (and therefore C# as a language, which I am also pretty familiar with).
Personally, I don't prefer one over the other.
So much for the set-up.
As I don't have any preferences, I'd like to choose the one that offers the most benefits to me for my 2.5D game.
The questions:
Is there a difference in getting App Store approval if you program in Swift or use Unity/C#?
How big is the difference in the published package size of the app between Unity and Xcode?
Does my Unity-written app run as smoothly as my Xcode-written app?
I hope you can help me with that.
If I missed some points there, feel free to criticize me and give me your opinions on it.
Greetings
Chriz
Is there a difference in getting App Store approval if you program in Swift or use Unity/C#?
No; given this general comparison, there should be nothing here favoring or disallowing one over the other.
How big is the difference in the published package size of the app between Unity and Xcode?
That is very hard to say. Unity inclusion will add libraries to your bundle, whereas Apple already ships shared libraries as part of the OS, used by every app. Think shared libraries here; only Apple is permitted to do this. Not to be confused with the soon-to-be-released iOS 9 'App Thinning'.
The larger weight will come from media/images/bitmaps.
Does my Unity-written app run as smoothly as my Xcode-written app?
Since they both end up using OpenGL, the end result should be the same or very similar. Obviously, as the OS and devices mature, if Unity doesn't leverage the new capabilities, you could end up giving up performance advantages.
But the flip side of being so tightly coupled to Swift/iOS/Apple is that you abandon the Android market. Based on what you shared, I'd suggest Unity if there is even a remote possibility you want to deploy to Android, desktops, or *TV devices in the future.
Hi, I'm a beginner with the Swift language and now I want to learn more about Playgrounds.
According to the limitations of Playgrounds, they do not support on-device execution and custom entitlements. If there is no on-device execution, then is a Playground just a way of checking how our application works, or does it just give an overview of how the app looks on its right-hand side?
Swift playgrounds are interactive documents where Swift code is compiled and run live as you type. Results of operations are presented in a step-by-step timeline as they execute, and variables can be logged and inspected at any point. Playgrounds can be created within an existing Xcode project or as standalone bundles that run by themselves.
Playgrounds provide a great opportunity to document functions and library interfaces by showing syntax and live execution against real data sets. For the case of the collection functions, we’ve created the CollectionOperations.playground, which contains a list of these functions, all run against sample data that can be changed live.
Playgrounds allow you to edit the code listings and see the result immediately.
Playgrounds do have support for showing UIViews but they are not interactive (with touches). So they are mostly used to test out algorithms and the look of your view.
However, they can also contain compiled code in the Sources folder, which runs much faster than the code in the playground itself.
Playgrounds are used for rapid prototyping of code snippets, and see the results in real time. As the name suggests, it’s a great way to “see” what your code is doing. Plus, according to the default comment by Apple, it’s a place to play! You do not need to wait for code to compile or make a new project and set it up just to check a small part of your code.
They can be used for:
Documentation and testing
Prototyping Accelerate (optimized signal processing)
Rapid prototyping
Making notebook type document to learn features with live results
Here are a few links for you to check:
Ray Wenderlich
General Details
Apple WWDC 2014
Apple WWDC 2015
I want to create a button with a background, but I want to set it using an XML file, the same way Android does.
Is there a way to achieve this?
Simply put, you cannot directly port your Android specifics or liberties to Xcode/Objective-C. You must learn at least the basics.
For your specific query, you can use the following answer:
Set a button background image iPhone programmatically
Also, I would firmly suggest you start learning Objective-C. There are ample resources out there:
Apple
Ray Wenderlich
Cocoa Dev Central
iPhone Dev SDK
Geeky Lemon
and many many many more ...