Rendering can take a lot of computational power. As a result, frames may be skipped while the UI waits for rendering to finish.
I was wondering how I could implement rendering or painting asynchronously, i.e. in an isolate.
Functions like compute are not an option because SendPort only works with primitive types, and painting requires a PaintingContext or Canvas.
There are Flutter plugins that require heavy rendering. Hence, I thought that I could find answers in the video_player plugin, but it uses Texture, i.e. does not render in Dart.
I am wondering if there are any idioms or example implementations regarding this.
If you are wondering how I implement rendering, you can take a look at FlareActor. It turns out that they handle painting exactly like I do. Now I am wondering why I am running into bottlenecks and Flare is not. Are there any guides on optimizing painting?
I solved it by writing all my required pixels to the BMP file format and then drawing the result with Canvas.drawImage instead, since the Flutter canvas cannot handle many individual Canvas operations: https://stackoverflow.com/a/55855735/6509751
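The core of that trick is simply laying a BMP header over a raw pixel buffer so the whole frame can be decoded and drawn as one image. Here is a minimal sketch of the format in TypeScript rather than Dart, purely for illustration; `rgbaToBmp` is a hypothetical helper name:

```typescript
// Minimal sketch of the BMP trick: wrap raw RGBA pixels in a 32-bit BMP
// header so they can be decoded as a single image and drawn in one call.
function rgbaToBmp(pixels: Uint8Array, width: number, height: number): Uint8Array {
  const headerSize = 14 + 40;                 // file header + BITMAPINFOHEADER
  const fileSize = headerSize + width * height * 4;
  const bmp = new Uint8Array(fileSize);
  const view = new DataView(bmp.buffer);

  // BITMAPFILEHEADER
  bmp[0] = 0x42; bmp[1] = 0x4d;               // 'BM' magic
  view.setUint32(2, fileSize, true);          // total file size (little-endian)
  view.setUint32(10, headerSize, true);       // offset to pixel data

  // BITMAPINFOHEADER
  view.setUint32(14, 40, true);               // info header size
  view.setInt32(18, width, true);
  view.setInt32(22, -height, true);           // negative height = top-down rows
  view.setUint16(26, 1, true);                // color planes
  view.setUint16(28, 32, true);               // bits per pixel
  view.setUint32(30, 0, true);                // BI_RGB, no compression

  // Pixel data: BMP expects BGRA order, so swap the R and B channels.
  for (let i = 0; i < width * height; i++) {
    const src = i * 4, dst = headerSize + i * 4;
    bmp[dst] = pixels[src + 2];               // B
    bmp[dst + 1] = pixels[src + 1];           // G
    bmp[dst + 2] = pixels[src];               // R
    bmp[dst + 3] = pixels[src + 3];           // A
  }
  return bmp;
}
```

Because 32-bit rows need no padding, the header can be written once and the pixel copy is a straight channel swap, which keeps the per-frame cost to a single pass over the buffer.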
I want to use Electron as a debug overlay for a Vulkan render engine I'm building. Since I have a lot of requirements for this debug tool, writing one in-engine myself would take far too long. I would like to use Electron instead of Qt or similar because I feel it is a lot more powerful and flexible with less effort (once it's working).
The problem is that I somehow have to get either my render output into Electron or Electron's output into my engine. As far as I can tell, the easiest solution would be to copy the data back to the CPU and then transfer it (sketched after this question), but that would be extremely slow and cost a lot of bandwidth. So I was wondering if there is a better solution.
I have two ideas for making this work, but I haven't found any way to implement them, or even anyone talking about it.
The first would be to have Electron render on the GPU, somehow get the handle of its output texture, and import it into my render engine using Vulkan external memory. However, as I have no experience with Chromium and there doesn't seem to be anyone else who has done this, I don't think it would work out too well.
The second idea is the opposite: use a canvas element with WebGL, and again use Vulkan external memory to copy my engine's output into a texture and display it. I have full control over the draw process here, so I think it would be a lot simpler and more stable. However, I again found no way of setting up a WebGL texture handle as an external memory object.
Is there a better way of doing this, or any guidance on how to implement it?
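For reference, the CPU-copy fallback mentioned above would look roughly like this on the Electron/WebGL side. This is only a sketch under assumptions: an existing `gl` context and `texture`, and a tightly packed RGBA8 frame arriving from the engine (the engine-side readback and the transport into Electron are not shown):

```typescript
// Minimal sketch of the CPU-copy fallback: the engine reads its framebuffer
// back to the CPU, ships the bytes to the Electron renderer somehow, and the
// renderer uploads them into a WebGL texture every frame.
function uploadEngineFrame(
  gl: WebGLRenderingContext,
  texture: WebGLTexture,
  pixels: Uint8Array,   // tightly packed RGBA8 frame from the engine
  width: number,
  height: number,
): void {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Re-specify the full texture; for a fixed-size stream, texSubImage2D
  // after one initial allocation avoids reallocating GPU storage.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  // No mipmaps and edge clamping, so non-power-of-two sizes work in WebGL 1.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
}
```

This is the slow path the question wants to avoid, but it is a useful baseline to measure any external-memory approach against.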
I'm super new to 3D graphics (I've been trying to learn actual WebGL instead of using a framework) and I'm now in the constructive solid geometry (CSG) phase. I know sites like tinkercad.com use CSG with WebGL, but they recalculate your design every time you load the page instead of doing the subtraction, addition, and intersection of primitive objects once and then storing the resulting vertices for later use. I'm curious whether anybody knows why they do it that way (maybe just to conserve resources on the server?) and whether there is some straightforward way of extracting those vertices right before the draw call. Maybe a built-in function of WebGL? I haven't found anything so far; when I try logging the object data passed to gl.bufferData() I get multiple Float32Arrays (one for each object that was unioned together) instead of one complete set of vertices.
By the way, the only GitHub project I've found with CSG for WebGL is https://github.com/evanw/csg.js/ and it's pretty straightforward; however, it uses a framework, and I was curious if you know of any other CSG WebGL code out there that doesn't rely on one. I'd like to write it myself either way, but just being able to see what others have done would be nice.
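For what it's worth, once the CSG operations have produced the final triangles, "storing the end design vertices" mostly amounts to concatenating the per-object arrays into one buffer that can be serialized and reloaded. A minimal sketch, with `mergeVertexArrays` as a made-up helper name:

```typescript
// Bake several per-object vertex arrays into one buffer before the draw
// call, instead of re-running the CSG operations on every page load.
function mergeVertexArrays(parts: Float32Array[]): Float32Array {
  const total = parts.reduce((sum, p) => sum + p.length, 0);
  const merged = new Float32Array(total);
  let offset = 0;
  for (const part of parts) {
    merged.set(part, offset);   // copy each object's vertices in sequence
    offset += part.length;
  }
  return merged;
}

// Usage: one buffer, one draw call. The merged array could also be saved
// (e.g. as a binary blob) and reloaded later:
// gl.bufferData(gl.ARRAY_BUFFER, mergeVertexArrays(objectArrays), gl.STATIC_DRAW);
```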
Context
I am building a multiplayer D&D encounter tool (in C# for WinRT), a simple 2D grid-based utility where each player can control their character while the DM controls the whole environment, mobs, etc. I decided to use DirectX (through the SharpDX wrapper) rather than standard XAML elements (which would probably handle this scenario too), mainly to learn DirectX and possibly to achieve better performance on lower-spec PCs (I am not sure how XAML would handle thousands of sprites on the screen).
Question
Since I am new to DirectX I was wondering what is the best* way to draw the sprites (characters, props, background tiles, etc)?
*By best I mean in terms of performance and GPU memory requirements. My aim is to have this tool run well even on low-spec ARM tablets, and I have no idea how demanding each approach is.
Currently I am using the Direct2D BitmapSourceEffect and loading each sprite batch using the WIC API. Another approach, which I also encountered in various DirectX books, is described in this SO question: What is the best pratice [sic] to render sprites in DirectX 11?
Using the BitmapSourceEffect seems to be easier but is it the right tool to do the job? I plan to compare the two approaches later today (in terms of FPS, rendering a large number of sprites), but perhaps you guys can provide more insight into this matter. Thanks!
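For the record, the batching idea behind that SO question is largely API-agnostic, so here is a rough sketch of it in TypeScript/WebGL terms rather than SharpDX (the Direct3D version differs mainly in how the buffer is uploaded and drawn); the `Sprite` shape and `buildBatch` name are made up for this sketch:

```typescript
// Accumulate all sprite quads that share a texture atlas into one vertex
// array, then issue a single draw call instead of one draw per sprite.
interface Sprite { x: number; y: number; w: number; h: number;
                   u0: number; v0: number; u1: number; v1: number; }

function buildBatch(sprites: Sprite[]): Float32Array {
  // 6 vertices per sprite (two triangles), 4 floats each (x, y, u, v).
  const data = new Float32Array(sprites.length * 6 * 4);
  let i = 0;
  for (const s of sprites) {
    const quad = [
      s.x,       s.y,       s.u0, s.v0,
      s.x + s.w, s.y,       s.u1, s.v0,
      s.x,       s.y + s.h, s.u0, s.v1,
      s.x + s.w, s.y,       s.u1, s.v0,
      s.x + s.w, s.y + s.h, s.u1, s.v1,
      s.x,       s.y + s.h, s.u0, s.v1,
    ];
    data.set(quad, i);
    i += quad.length;
  }
  return data; // upload once, draw sprites.length * 6 vertices in one call
}
```

The point of the pattern is to keep the per-sprite cost down to a few vertex writes, so thousands of sprites become one buffer upload and one draw call per atlas.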
Has anyone tried using Scaleform for actual game asset rendering in an iOS game, not just UI? The goal is to use vector SWFs that will be converted to polygons via Scaleform, but with C++ code driving the game (no AS3). If you have tried it, how did you feel about the results? Could it render fast enough?
Scaleform has been used in several iOS games as the entire engine (including AS3). Here are some examples:
TinyThief: http://inthefold.autodesk.com/in_the_fold/2013/07/5-ants-brings-tiny-thief-to-ios-and-android-with-autodesk-scaleform-mobile-sdk.html
You Don't Know Jack: http://inthefold.autodesk.com/in_the_fold/2013/01/you-dont-know-jack-qa.html
You can certainly use Scaleform for this purpose. Scaleform includes the Direct Access API (DAPI) which allows C++ to manage Flash resources (this includes creating symbol instances at runtime and managing their states + lifetime).
The GFx::Value class is the basis of DAPI and should provide most, if not all, of the functionality you would need. You may still need some AS3 code to glue some things together, but that should be negligible.
Performance of static vector content is dependent on the complexity of the shape (more paths, more styles => more triangles + batches). I'd try to limit the number of vector (shape) timeline animations because shape morphing will cause re-tessellation. Scaling vector content will also cause re-tessellation, so keep that in mind.
I am working on an image processing app for iOS, and one of the stages of my application is vector-based image posterization/color detection.
Now, I've written code that can determine the posterized color per pixel, but going through each and every pixel in an image would, I imagine, be quite demanding for the processor of an iOS device. As such, I was wondering if it is possible to use the graphics processor instead.
I'd like to create a sort of "pixel shader" which uses OpenGL-ES, or some other rendering technology to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
Can anyone point me in the correct direction?
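For a sense of what such a pixel shader looks like: posterization quantizes each color channel independently, which maps naturally onto a fragment shader. Below is a minimal GLSL ES sketch (the same shader language is used by OpenGL ES 2.0 and WebGL), wrapped in a TypeScript string for illustration; the uniform and variable names are made up for this example:

```typescript
// A classic posterize fragment shader: snap each channel to a fixed number
// of quantization levels. Every pixel is processed independently on the GPU.
const posterizeFragmentShader = `
  precision mediump float;
  uniform sampler2D u_image;   // the source photo
  uniform float u_levels;      // e.g. 4.0 for four tones per channel
  varying vec2 v_texCoord;

  void main() {
    vec4 color = texture2D(u_image, v_texCoord);
    // floor() snaps each channel down to the nearest quantization step.
    vec3 posterized = floor(color.rgb * u_levels) / u_levels;
    gl_FragColor = vec4(posterized, color.a);
  }
`;
```

Rendering a full-screen quad with the image bound as `u_image` then produces the posterized result in one GPU pass.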
I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and I assume you'll need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. It doesn't sound like you're otherwise particularly interested in OpenGL, so there's no real reason to dig into the raw API yourself.
I will add one consideration: under iOS 4 I found it was often faster to do this kind of work on the CPU (using GCD to distribute it among cores) than on the GPU whenever I needed to read the results back at the end for any sort of serial access. That's because OpenGL is generally designed so that you upload an image and it converts it into whatever format it wants; if you want to read it back, it converts it to the format you expect to receive and copies it to where you want it. So what you save on the GPU you pay for again, because the GL driver has to shunt and rearrange memory. As of iOS 5, Apple has introduced a special mechanism that effectively gives you direct CPU access to OpenGL's texture store, so that's probably no longer a concern.