Does anybody know about the present and future of sharing resources across WebGLRenderingContexts?
Right now I create a hidden GL context that generates the small images I need inside a timeout callback (I can't use the main context to create them), and then I create samplers from those images in the main context. I think sharing resources between WebGL contexts could help me out.
I hope you are doing well! I am relatively new to Electron and, after reading numerous articles, I am still confused about where I should put heavy computing functions in Electron. I plan on using node libraries in these functions, and I have read numerous articles stating that these functions should go in the main process. However, isn't there a chance that this would overload my main process and thus block my renderers? That is definitely not desired, and I was wondering why I could not just put these functions in preload.js. Wouldn't that be better for performance? Also, if I am only going to require node modules and only connect to my API, would there still be security concerns if I were to put these functions in preload.js? Sorry for the basic questions and please let me know!
Thanks
You can use web workers created in your renderer process. They won't block the UI.
However, you mentioned planning to use node modules. Depending on what they are, it could make more sense to run them from the main process. (But see also https://www.electronjs.org/docs/latest/tutorial/multithreading, which points out that you can set nodeIntegrationInWorker independently of nodeIntegration.)
You can use https://nodejs.org/api/worker_threads.html in Node too, or for a process-level separation there is also https://nodejs.org/api/child_process.html.
Note that worker threads in the browser (and therefore in the renderer process) do not share memory by default; data passed via postMessage is serialized and copied back and forth. If your heavy compute process works on large data structures, bear this cost in mind. Node's worker threads, by contrast, do allow sharing memory between threads.
I have read several tutorials that recommend using two (or more) NSManagedObjectContexts when implementing Core Data, so as not to block the UI on the main queue. I am a little confused, however, because some recommend attaching a mainQueueConcurrencyType context to the persistent store coordinator and giving it a privateQueueConcurrencyType child, while others suggest the opposite.
I would personally think the best setup for using two contexts would be persistent store coordinator -> privateQueueConcurrencyType -> mainQueueConcurrencyType, saving only to the private context and reading only from the main context. My understanding of the benefits of this setup is that saves on the private context don't have to go through the main context, and that reads on the main context will always include the changes made on the private context.
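For concreteness, here is a rough sketch of the stack I mean, where coordinator stands in for an already-configured NSPersistentStoreCoordinator:

    import CoreData

    // Writer: private queue, attached to the store, does all the saving.
    let writer = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    writer.persistentStoreCoordinator = coordinator

    // Reader: main queue, child of the writer, does all the UI-facing reading.
    let main = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    main.parent = writer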
I know that many apps require a unique solution that this setup might not work for, but as a general good practice, does this make sense?
Edit:
Some people have pointed out that this setup isn't necessary since the introduction of NSPersistentContainer. The reason I am asking about it is that I've inherited a huge project at work that uses a pre-iOS-10 setup, and it's experiencing issues.
I am open to rewriting our Core Data stack using NSPersistentContainer, but I wouldn't be comfortable spending the time on it unless I could first find an example of how it should be set up with respect to our use cases.
Here are the steps that most of our main use cases follow:
1) The user edits an object (e.g. adds a photo or text to an abstract object).
2) An object (a sync task) is created to encapsulate an API call that updates the edited object on the server. Sync tasks are saved to Core Data in a queue and fire one after another, and only when internet is available (thus allowing offline editing).
3) The edited object is also immediately saved to Core Data and returned to the user so that the UI reflects its updates.
With NSPersistentContainer, would having all the writing done in performBackgroundTask, and all the viewing done on viewContext suffice for our needs for the above use cases?
Since iOS 10 you don't need to worry about any of this; just use the contexts NSPersistentContainer provides for you.
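A minimal sketch, assuming your model file is named "Model": reads stay on viewContext, and each write (the edited object plus its sync task) goes through performBackgroundTask.

    import CoreData

    let container = NSPersistentContainer(name: "Model")
    container.loadPersistentStores { _, error in
        if let error = error { fatalError("Failed to load store: \(error)") }
    }
    // Reads happen on the main queue; background saves merge in automatically.
    container.viewContext.automaticallyMergesChangesFromParent = true

    // Writes go through a background context on a private queue.
    container.performBackgroundTask { context in
        // ... create or update the edited object and its sync task here ...
        do { try context.save() } catch { /* handle the save error */ }
    }

That covers the use cases above: the sync-task queue and the edited object are saved off the main queue, and viewContext picks up the changes for the UI.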
I have a long list of cells which each contain an image.
The images are large on disk, as they are used for other things in the app like wallpapers etc.
I am familiar with the normal Android process for resampling large bitmaps and disposing of them when no longer needed.
However, I feel that resampling the images on the fly in a list adapter would be inefficient without caching them once decoded; otherwise a fling would spawn many threads, and I would have to manage cancelling unneeded loads, etc.
The app is built making extensive use of the fantastic MVVMCross framework. I was thinking about using the MvxImageViews as these can load images from disk and cache them easily. The thing is, I need to resample them before they are cached.
My question is: does anybody know of an established pattern for doing this in MVVMCross, or have any suggestions as to how I might go about achieving it? Do I need to customise the DownloadCache plugin? Any suggestions would be great :)
OK, I think I have found my answer. I had accidentally been looking at the old MVVMCross 3.1 version of the DownloadCache plugin / MvxLocalFileImageLoader.
After cloning the up-to-date (v3.5) repo, I found that this functionality has been added. Local files are now cached and can be resampled on first load :)
The MvxImageView has a Max Height / Width setter method that propagates out to its MvxImageHelper, which in turn sends it to the MvxLocalFileImageLoader.
One thing to note is that the resampling only happens if you are loading from a file, not if you are using a resource id.
Source is here: https://github.com/MvvmCross/MvvmCross/blob/3.5/Plugins/Cirrious/DownloadCache/Cirrious.MvvmCross.Plugins.DownloadCache.Droid/MvxAndroidLocalFileImageLoader.cs
Once again MVVMCross saves my day ^_^
UPDATE:
Now I actually have it all working, here are some pointers:
As I noted in the comments, local image caching is currently only available in the 3.5.2 alpha of MVVMCross. That was incompatible with my project, so, staying on 3.5.1, I created my own copies of the 3.5.2a MvxImageView, MvxImageHelper and MvxAndroidLocalFileImageLoader, along with their interfaces, and registered them in the Setup class.
I modified the MvxAndroidLocalFileImageLoader to also resample resources, not just files.
You have to bind to the MvxImageView's ImageUrl property using the "res:" prefix, as documented here (Struggling to bind local images in an MvxImageView with MvvmCross); if you bind to 'DrawableId', the image is assigned directly to the underlying ImageView and no caching/resampling happens.
I needed to be able to set the customised MvxImageView's max height/width for resampling after the layout was inflated/bound, but before the images were retrieved (I wanted to set them during 'OnMeasure', but the images had already been loaded by then). There is probably a better way, but I hacked in a bool flag, 'SizeSet'. While it is false (i.e. during the initial binding) the image URL is stored temporarily; once it is set to true (after OnMeasure), the stored URL is passed to the underlying ImageHelper to be loaded.
One section of the app uses full-screen images as the backgrounds of fragments in a pager adapter. The bitmaps are not garbage collected quickly enough, leading to eventual OOMs when trying to load the next large image. Manually calling GC.Collect() when the fragments are destroyed frees up the memory, but it causes a UI stutter and also wipes the cache, as the cache uses weak refs.
I was getting frequent SIGSEGV crashes on Lollipop when moving between fragments in the pager adapter (they never happened on KitKat). I managed to work around the issue by adding SetImageBitmap(null) to the ImageView's Dispose method, and then calling Dispose() on the ImageView in its containing fragment's OnDestroyView().
Hope this helps someone, as it took me a while!
Apple's docs say I'm doing everything right, but I get a hard crash, 100% reproducible, inside Apple's driver in a simple OpenGL ES 2 program.
It appears there is a major bug in Apple's driver w.r.t. multithreaded access, even when following Apple's own instructions on multithreaded access. Or ... I'm missing something in the docs, even though I've read and re-read them multiple times :(.
I would be extremely happy to use either NSOperations or GCD (they're implemented the same under the hood anyway), but I cannot get either to work.
Here's what I know / have tried:
All on main thread = works fine:
Setup GL, render a single triangle, load textures, load geometry, render full scene
GCD = hard crash as soon as the background block "finishes":
Setup GL on main thread, render a single triangle
dispatch_async() to create new EAGLContext, load textures, geometry
Render on main thread
NSOperationQueue = hard crash as soon as the background operation "finishes":
Setup GL on main thread, render a single triangle, create new NSOperationQueue
addOperationWithBlock: to create new EAGLContext, load textures, geometry
Render on main thread
Additional notes / things Apple instructs us to do:
CHECKED: I'm connecting the new EAGLContext to the old one by sharing the old context's sharegroup property
CHECKED: I tried explicitly glFlush() immediately after loading textures + geometry (no effect either way)
CHECKED: I tried re-using the original thread's EAGLContext (bad idea!) - different crash, not surprising since Apple says this is undefined and will crash
I am not 100% certain what you are trying to do, but consider the two following things:
An OpenGL ES 2 context cannot be current on more than one thread at the same time.
However, you can use a single context on several threads if you always switch the current context between nil and your context, so that it is never current on two threads at once (see the sketch below).
This part I am less certain about. You can have shared contexts, but the shared context should only be used for loading textures for the first context; it's not supposed to be used for rendering as well.
I'm not sure how this is enforced in practice, or what happens if you do render with a shared context.
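Going back to the first point, a minimal sketch of that hand-off, assuming context is your existing EAGLContext:

    import OpenGLES

    // Main thread: release the context before handing it to the worker.
    EAGLContext.setCurrent(nil)

    DispatchQueue.global().async {
        EAGLContext.setCurrent(context)   // current on the worker thread only
        // ... issue GL calls here ...
        glFlush()
        EAGLContext.setCurrent(nil)       // release it again
        DispatchQueue.main.async {
            EAGLContext.setCurrent(context)
            // ... resume rendering on the main thread ...
        }
    }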
Hope this helps you.
I can tell you, at least, that you can load textures on a serial queue with a sharegroup-ed EAGLContext. In my code, and in the cocos2d for iPhone code, no problems at all.
You should check the cocos2d code for that.
https://github.com/cocos2d/cocos2d-objc/blob/v3.4/cocos2d/CCTextureCache.m
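For illustration, a minimal sketch of that pattern, assuming mainContext is the EAGLContext you render with:

    import OpenGLES

    let textureQueue = DispatchQueue(label: "gl.texture.loader") // serial by default

    textureQueue.async {
        // A second context in the same sharegroup as the rendering context.
        let loader = EAGLContext(api: .openGLES2, sharegroup: mainContext.sharegroup)!
        EAGLContext.setCurrent(loader)

        var tex: GLuint = 0
        glGenTextures(1, &tex)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        // ... glTexImage2D(...) with your decoded pixel data ...
        glFlush()                 // make the texture visible to the main context
        EAGLContext.setCurrent(nil)

        DispatchQueue.main.async {
            // safe to render with `tex` on the main context now
        }
    }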
I've been learning OpenGL ES 2.0/GLSL and related iOS quirks by looking at code and developer videos and I've noticed that there's never any mention of asynchronous shader compilation. Aside from instructors, writers, or salesmen (er, engineers) worrying about adding complexity to their examples, is there a reason for that?
For example, most web data retrieval tutorials hammer home the need for some sort of gymnastics (pthreads, NSOperation, GCD, baked-in async instance methods, etc.) to keep from blocking the main thread, so why would blocking at app launch be considered acceptable?
It can be a little tricky to synchronize two EAGLContexts; beyond that, there is nothing against loading this kind of thing in the background (generally, loading every kind of asset: textures, shaders, etc.).
Probably the real reasons are that most people think of OpenGL (ES) as something monolithic that only works on a single thread, or they never had loading-time issues that made it worthwhile to load assets on a background thread, or they just don't care (for some people it's probably all of the above).
As for your last question: networking can add HUGE latency, and by "can" I mean "will". Resource loading is less problematic: compared to a network request, loading a shader or texture takes far less time, and you already know roughly how long it will take in the normal case. Plus, people are used to loading screens in games, whereas they don't want to see a loading screen when they scroll a table view just so your application can fetch a picture from a server that doesn't respond.
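If you do want to move compilation off the main thread, here is a hedged sketch of the sharegroup approach; it assumes shader objects are visible across the sharegroup on your target OS version (if they are not, compile on the main context during a loading phase instead):

    import OpenGLES

    func compileShaderAsync(_ source: String, type: GLenum,
                            mainContext: EAGLContext,
                            completion: @escaping (GLuint) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let bg = EAGLContext(api: .openGLES2, sharegroup: mainContext.sharegroup)!
            EAGLContext.setCurrent(bg)

            let shader = glCreateShader(type)
            source.withCString { cstr in
                var ptr: UnsafePointer<GLchar>? = cstr
                glShaderSource(shader, 1, &ptr, nil)
            }
            glCompileShader(shader)

            var ok: GLint = 0
            glGetShaderiv(shader, GLenum(GL_COMPILE_STATUS), &ok)
            glFlush()                     // push the object out to the sharegroup
            EAGLContext.setCurrent(nil)

            DispatchQueue.main.async { completion(ok == GL_TRUE ? shader : 0) }
        }
    }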