WebGL: destroy or reuse buffer/texture?

I am a newbie in WebGL here. :)
I understand vertex data and textures should not be updated very often, BUT when they do change, which one is preferred:
- Destroy the previous buffer (STATIC_DRAW) by calling gl.deleteBuffer and create a new one.
- Reuse the same one (DYNAMIC_DRAW to begin with).
(No, I am not using any library, just WebGL directly.)
Does the same rule apply to textures? Thanks.
Interestingly enough, I cannot find existing discussions on this... or maybe I just missed them.

Let's first look at when we would want to delete a resource:
Since OpenGL is a C-style API, the user is assumed to manage memory, and part of that is managing GPU memory. Due to encapsulation and other design principles one is not always able to share and reuse resources, so the delete* functions exist to free up the allocated resources. In JavaScript, the garbage collector makes sure that WebGL resources that go out of scope are flagged for deletion just like calling delete* does¹, which makes the delete* functions rather superfluous in the context of WebGL.
With that cleared up, you'd always want to update your resources where possible, since recreating them also means redoing all the setup you've already done on them, e.g. setting vertex attribute pointers, or filter and wrapping modes for textures.
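To illustrate the reuse approach, here is a minimal sketch, assuming gl is an existing WebGLRenderingContext and that vertices/newVertices are your own data arrays (illustrative names, not from the question):

    // Allocate once with the DYNAMIC_DRAW hint, since we plan to update it.
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.DYNAMIC_DRAW);

    // Later, when the vertex data changes, overwrite in place
    // instead of deleting and recreating the buffer.
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferSubData(gl.ARRAY_BUFFER, 0, new Float32Array(newVertices));

    // Textures work analogously: allocate once with texImage2D,
    // then update the contents with texSubImage2D.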


Android call View Controller method from ViewModel

I started playing around with Android Jetpack (ViewModel, Architecture Components, Lifecycle, and so on).
Until now, I was working with the Model View Presenter architecture, and I actually found it quite an easy architecture to test and maintain.
On the other hand, the big advantage I can see in using ViewModels instead is their native coupling with the Activity and Fragment lifecycle, which has always been one of the biggest pains for Android developers, so I think this is a very big step forward.
That said, I think there is a big disadvantage: with this new approach, it seems much more tricky to call an Activity's or Fragment's methods because, as stated in the official docs,
A ViewModel must never reference a view, Lifecycle, or any class that
may hold a reference to the activity context.
In the MVP approach, the Presenter had a contract with the view and could call all of its methods. I did some research on how to address this scenario with Architecture Components, but it seems there is no easy and painless way to do it: in the end you always have to handle state in the ViewModel and react to those changes in Activities and Fragments. Some suggest using the SingleLiveEvent class, which makes it a little bit easier, but it is still much more painful than before.
So my question is:
The docs say you cannot reference anything holding a reference to an Activity Context (to avoid memory leaks, I guess), but what if I do that and then clear all the references in the ViewModel's onCleared()?
You wouldn't avoid memory leaks, because, for example, if you rotate your device your Activity will be destroyed and then recreated, but the ViewModel will remain the same, and its onCleared() will not be called (hence your old Activity still remains in memory, since your ViewModel still references it).
Generally, MVVM conceptually says that the ViewModel should not know about the View, and this is the beauty of architecture: there is no single better pattern, just a more suitable one, and you should choose the one that works better for you.
I can think of a couple of ways to "notify a user" of the ViewModel:
- A LiveData object that changes, and an Observer of this data (see the sketch after this list)
- Sending a Broadcast to a BroadcastReceiver using the Application's Context, if you don't mind accessing it statically from the ViewModel
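A minimal sketch of the LiveData option in Kotlin (assuming AndroidX; MyViewModel and showToast are illustrative names):

    import androidx.lifecycle.LiveData
    import androidx.lifecycle.MutableLiveData
    import androidx.lifecycle.ViewModel

    // The ViewModel only publishes state; it never references a view.
    class MyViewModel : ViewModel() {
        private val _message = MutableLiveData<String>()
        val message: LiveData<String> = _message

        fun onSomethingHappened() {
            _message.value = "show a toast"  // emit state instead of calling the view
        }
    }

    // In the Activity or Fragment, the view observes and calls its own methods:
    // viewModel.message.observe(this) { msg -> showToast(msg) }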
Edit: I know this is not exactly an answer to the question, but I think it removes the need for one.
but what if I do that and then I clear all references in ViewModel's onCleared()?
That is far too late. onCleared() is only called when the Activity is finished or the Fragment is removed; it is not called on configuration changes.
But you could potentially use some form of command queue to emit events only when a subscriber is available, for example DispatchWorkSubject in RxJava2-Extensions.
Just make sure you subscribe in onStart and then dispose of your Disposable in onStop.

Data Accessor object: singleton or some other pattern? (Objective-C)

It seems to satisfy the three requirements here: On Design Patterns: When to use the Singleton?
- I need it to exist only once.
- I need to access it from all over the source base.
- It handles concurrent access, i.e. locks for writes, but can handle concurrent reads.
Hi all,
I've been reading a lot of no doubt intelligent, educated, and wise gems of advice that singletons are 'evil' and anti-patterns, or just plain bad news, with the argument that logging makes sense but not much else.
I'm just interested to know whether the case of what is essentially a persistent data store context makes sense for a singleton, i.e. reading/writing from disk to memory and referencing an object graph.
And if not, how do people normally tackle this issue? I don't currently have any problem with it: I know it's created only once, it's fast, and the accessor logic is in one place, meaning it takes me one line of code to do anything data-model related.
That leaves the only argument that it's bad for testing, in that it's a hard-coded production implementation of the data access, but couldn't I just write a method swizzle through a category, or in the test code, to create the test version of the singleton?
And the final argument from DI testers is that it's a hard-coded implementation rather than simply an interface to something. I do get that, but I don't really have a major drive to use a DI framework, given that I can use protocols for the implementation and separate init methods for setting up an object's state in testing. There are only ever going to be two types of state for the singleton, or realistically one: production.
Making it (in my opinion) much easier to read and faster to develop.
Change my view SO?
Yup, for some people singletons are evil. For newer developers who have little MRC knowledge and mostly know ARC, they sound scary because they need to mess with memory, volatile, synchronization, etc.
However, there is nothing against using them; they do have their own purposes, some of which are below:
- When sharing large data models such as arrays and dictionaries between multiple screens (view controllers), we can't really store them in NSUserDefaults (user defaults is permanent storage, and storing such large entries makes the app start slowly); singletons are a better fit, since they live only in the current app context and restarting the app creates a new one.
- When we need a stable DB connection that is accessible all over the app without messing with connecting and closing in every business class.
- When we want the app to be able to theme itself dynamically, we can create a singleton class that holds all the color and image instances, and use that instance in view controllers and views, so no code duplication or re-processing of the theme occurs anywhere.
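The conventional thread-safe accessor in Objective-C uses dispatch_once; here is a minimal sketch (DataStore and sharedStore are illustrative names), with a barrier queue noted as one way to get the locked-writes/concurrent-reads behaviour from the question:

    #import <Foundation/Foundation.h>

    @interface DataStore : NSObject
    + (instancetype)sharedStore;
    @end

    @implementation DataStore
    + (instancetype)sharedStore {
        static DataStore *shared = nil;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            shared = [[DataStore alloc] init];  // runs exactly once, thread-safely
        });
        return shared;
    }
    @end

    // One way to allow concurrent reads with exclusive writes is a concurrent
    // queue with barrier writes:
    //   reads:  dispatch_sync(accessQueue, ^{ ... });
    //   writes: dispatch_barrier_async(accessQueue, ^{ ... });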
You don't have to change your view, but tweak it a bit to get some positive intent towards singletons.
Hoping this clears it up, thanks.

How to start building a searchable garbage collector in Delphi (2009-2010)

I'm looking for a way to control all business objects I create in my applications written in Delphi.
As an article on Embarcadero's EDN (http://edn.embarcadero.com/article/28217) states, there are basically three ways to do this. I'm mostly interested in the last one, using interfaces. That way, when the business object is no longer referenced anywhere in the application, it will be disposed of, memory-wise (I'll get back to this part later).
When creating a new business object, it would be wise to ask that new object manager whether I already fetched it earlier in the program, thus avoiding the need to refetch it from the database. I already have the business object in memory, so why not use that one? Thus I'll need the list of available objects in memory to be searchable (fast).
The code provided there uses an "array of TObject" to store the collected objects, which doesn't make searching through the list very performant once it reaches a certain size. I would have to change that to either a TObjectList or some sort of binary searchable tree. What would be the best choice here? I already found some useful code (I think) at http://www.ibrtses.com/delphi/binarytree.html. Didn't the JCL have stuff on binary trees?
How would I handle "business objects" and "lists of business objects" in that tree? Would a business object that is part of a list be referenced twice in the tree?
Concerning the disposal of an object: I also want to set some sort of TTL (time to live) on each business object, forcing a refetch after a certain amount of time.
Should the reference counter fall to 0, I still want to keep the object around for a certain amount of time, in case the program still wants it within the TTL. That means I'll need some sort of threaded monitor looping over the object list (or tree) to watch for to-be-deleted objects.
I also came across the Boehm Garbage Collector DLL (http://codecentral.embarcadero.com/Download.aspx?id=21646).
So in short: would it be wise to base my "object manager" on the source code provided in the EDN article? What kind of list would I want to store my objects in? How should I handle lists of objects in my list? And should I still keep an object in memory for a while and have it disposed of by a threaded monitor?
Am I correct in my reasoning? Any suggestions, ideas or remarks before I start coding? Maybe some new ideas to incorporate into my code?
Btw, I'd be happy to share the result for others to benefit from, once some brilliant minds have given it a thought.
Thanks.
If you are using interfaces for reference counting and then stick those in a collection of some sort, then you will always have a reference to them. If your objective is "garbage collection", then you only need one or the other, but you can certainly use both if necessary.
What it sounds like you really want is a business object cache. For that you will want to use one of the new generic TDictionary collections. What you might want to do is have a TDictionary of TDictionary collections: one TDictionary for each of your object types. You could key your main TDictionary on an enumeration, or maybe even on the type of the object itself (I haven't tried that, but it might work). If you are using GUIDs for your unique identifiers then you can put them all in a single TDictionary.
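A minimal sketch of such a cache (Delphi 2009+, Generics.Collections), keyed on GUIDs rendered as strings for simplicity; IBusinessObject and TBusinessObjectCache are illustrative names:

    unit BusinessObjectCache;

    interface

    uses
      SysUtils, Generics.Collections;

    type
      // Illustrative marker interface for business objects.
      IBusinessObject = interface
        ['{6C7D3A52-8F1B-4E0D-9C2A-1B2C3D4E5F60}']
      end;

      TBusinessObjectCache = class
      private
        FItems: TDictionary<string, IBusinessObject>;
      public
        constructor Create;
        destructor Destroy; override;
        procedure Put(const AID: TGUID; const AObj: IBusinessObject);
        function TryGet(const AID: TGUID; out AObj: IBusinessObject): Boolean;
      end;

    implementation

    constructor TBusinessObjectCache.Create;
    begin
      inherited Create;
      FItems := TDictionary<string, IBusinessObject>.Create;
    end;

    destructor TBusinessObjectCache.Destroy;
    begin
      FItems.Free;  // releasing the dictionary drops its interface references
      inherited;
    end;

    procedure TBusinessObjectCache.Put(const AID: TGUID; const AObj: IBusinessObject);
    begin
      FItems.AddOrSetValue(GUIDToString(AID), AObj);
    end;

    function TBusinessObjectCache.TryGet(const AID: TGUID; out AObj: IBusinessObject): Boolean;
    begin
      Result := FItems.TryGetValue(GUIDToString(AID), AObj);
    end;

    end.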
Implement each of your business objects with an interface. You don't need to use smart pointers, since you are designing your business objects and can descend them from TInterfacedObject. Then reference them only by their interfaces, so they can be reference counted.
If you want to expire your cache then you will need some sort of timestamp on your objects that gets updated each time an object is retrieved from the cache. Then, when the cache grows beyond a specific size, you can prune everything older than a certain timestamp. Of course, that requires walking the entire cache.
Since you are combining interfaces and a collection, if you have a reference to an object (via its interface) and it gets pruned during cache cleanup, the object will remain alive until the reference goes away. This provides you additional safety. Of course, if you are still using the reference, that means you kept it for a long time without retrieving it from the cache; in that case you may want to update the timestamp when you read from or write to the properties too. A lot of that depends on how you will be using the business objects.
As far as refetching goes, you only want to do that if an object retrieved from the cache is older than the refetch limit. That way, if it gets pruned before you use it again, you are not wasting database trips.
You might consider just having a last modified time in each table. Then when you get an object from the cache you just check the time in memory against the time in the database. If the object has been changed since it was last retrieved, you can update it.
I would limit updating objects to the moment they are retrieved from the cache. That way you are less likely to modify an object while it is in use. If an object changes while you are in the middle of reading its data, that can produce some really odd behavior. There are a few ways to handle that, depending on how you use things.
Word of warning about using interfaces, you should not have both object and interfaces references to the same object. Doing so can cause trouble with the reference counting and result in objects being freed while you still have an object reference.
I am sure there will be some feedback on this, so pick what sounds like the best solution for you...
Of course now that I have written all of this I will suggest you look at some sort of business object framework out there. RemObjects has a nice framework, and I am sure there are others.
You might want to start by looking at smart pointers. Barry Kelly has an implementation for D2009.
For your business objects, I would use a GUID as the key field, or an integer that is unique across the database. As you load objects into memory, you could store them in a dictionary using the GUID as the key and a container object as the value.
The container object contains the business object, the TTL, etc.
Before loading an object, check the dictionary to see if it is already there. If it is, check the TTL and either use it, or reload and store it.
For very fast "by name" lookup in your container object, I suggest you look not to trees, but to hash tables. The EZ-DSL (Easy Data Structures) library for Delphi includes the EHash.pas unit, which I use for my hashing containers. If you are interested in that, I will forward it to you.
I tend to think in terms of "lifetime"-oriented "containers" that delete all the objects they own when they shut down.
I also think that you might consider making the objects themselves count their "usefulness" by having an "FLastUsed: Cardinal" data field. You can assign FLastUsed := GetTickCount and then have the system basically obey the limits you set up: maximum memory or instances actively in use, and how old an object may be (not used at all in X milliseconds) before it gets "demoted" to level 2, level 3, etc.
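A minimal sketch of that aging idea (TCachedItem, Touch, and MaxIdleMs are illustrative names; GetTickCount comes from the Windows unit):

    program StalenessSketch;
    {$APPTYPE CONSOLE}

    uses Windows;

    type
      TCachedItem = class
      public
        FLastUsed: Cardinal;
        procedure Touch;
      end;

    procedure TCachedItem.Touch;
    begin
      FLastUsed := GetTickCount;  // stamp the moment of last use
    end;

    function IsStale(AItem: TCachedItem; MaxIdleMs: Cardinal): Boolean;
    begin
      // Cardinal subtraction stays correct across GetTickCount wrap-around.
      Result := (GetTickCount - AItem.FLastUsed) > MaxIdleMs;
    end;

    var
      Item: TCachedItem;
    begin
      Item := TCachedItem.Create;
      try
        Item.Touch;
        Writeln('stale after 5s idle? ', IsStale(Item, 5000));
      finally
        Item.Free;
      end;
    end.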
I am thinking that for business objects you have a memory cost (keeping an object around when it may really only be a cache entry) and a coherency constraint (flush my changes to the database before you can destroy me) that make traditional "garbage collection" ideas necessary, but not sufficient, for the whole problem of business object lifetime management.
For a higher-level point of view, I recommend two books: "Patterns of Enterprise Application Architecture" by Martin Fowler, and Eric Evans's book about Domain-Driven Design, "Domain-Driven Design: Tackling Complexity in the Heart of Software" (http://dddcommunity.org/books#DDD). Martin Fowler explains the patterns for object management in business applications, including object repositories and different object/database mapping strategies. Eric Evans's book "...provides a broad framework for making design decisions...".
There are already some (open source) O/R mapper libraries for Delphi with active communities; maybe their source code can also provide some technical guidelines.
You are looking for a way to build a container that can store business objects and find already-instantiated ones at runtime?
Maybe you could give a look at http://www.danieleteti.it/?p=199 (Inversion of Control and Dependency Injection patterns.)

What strategy should be used when exposing C++ to Lua?

I have a C++ library which has functionality exposed to Lua, and I am seeking opinions on the best ways to organise my Lua code.
The library is a game engine with a component-based game object system. I want to be able to write some of these components as classes in Lua. I am using LuaBind, so I can do this, but there are some implementation choices I must make, and I would like to know how others have done it.
Should I have just one global lua_State, or one per object, one per scene, etc.?
This sounds like a lot of memory overhead, but it would keep everything nice and separate.
Should I have one globals table, or one per object, which can be put in place before a call to a member? This would seem to minimise the chances of one class deciding to use globals and another accidentally overwriting them, with less memory overhead than having many lua_States.
Or should I just bung everything into the one globals table?
Another question involves the Lua code itself. Two strategies occur to me: first, putting all class definitions in one place and loading them when the application launches; second, putting one class definition per file and simply making sure that file is loaded when I need to instance it.
I'd appreciate anyone's thoughts on this, thanks.
While LuaBind is certainly very nifty and convenient, as your engine grows, so will your compile times, drastically.
If you already have, or are planning to add, a messaging system (which I heavily recommend, specially for networking), then it simplifies problems significantly. In this case, what you will need to do is simply bind a few key functions to interface with the messaging system. This will keep your compile times down, and give you a very flexible system.
Since you are doing a component-based engine (good choice, BTW), it makes more sense to integrate scripting as an object component. It usually works well to make each scripting component a new coroutine that runs the behavior of that particular object. You need not worry about memory too much; Lua states are very light, and can be made really fast if you interface your memory manager with Lua.
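A minimal sketch of the coroutine-per-component idea in plain Lua (add_behaviour, update_all, and dt are illustrative names):

    -- Each scripted component runs its behaviour in its own coroutine,
    -- resumed once per frame by the engine.
    local behaviours = {}

    function add_behaviour(fn)
      behaviours[#behaviours + 1] = coroutine.create(fn)
    end

    function update_all(dt)
      for i = #behaviours, 1, -1 do
        local co = behaviours[i]
        local ok, err = coroutine.resume(co, dt)
        if not ok then
          print("script error: " .. tostring(err))
        end
        if not ok or coroutine.status(co) == "dead" then
          table.remove(behaviours, i)  -- drop finished or failed scripts
        end
      end
    end

    -- Example behaviour: yields each frame and receives a fresh dt on resume.
    add_behaviour(function(dt)
      while true do
        -- per-object update logic goes here
        dt = coroutine.yield()
      end
    end)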
If you implement scripting as a component, it is still a good idea to have global or per-level scripts loaded (to coordinate event triggers from other objects, or enemy spawn timers, for example).
As far as loading scripts goes, it would not be bad practice to just load all the scripts you will need for a level at once and keep them in a global table for fast access; loading Lua scripts is pretty fast, especially if you pre-compiled them.
One consideration is how you're planning to thread things. If you want to run the code for two Game Objects in parallel, for example, then they really ought to have their own separate lua_States so that they can both be running at the same time. (Of course, that also means that they can't really share any state, except via the C code where you'd need to be conscious of thread-safety.)
As to the Lua code, I'd recommend loading everything when the app launches (unless you really need to do "lazy" loading of your core classes on demand). It typically simplifies maintenance and debugging. And in the case of code being loaded that's no longer needed, the garbage collector will clean that up with a quickness. :-)

Is it worth caching objects created by Delphi's Memory Manager?

I have an application that creates and destroys thousands of objects. Is it worth caching and reusing objects, or is Delphi's memory manager fast enough that creating and destroying objects multiple times is not that great an overhead (as opposed to the bookkeeping of a cache)? When I say worth it, I'm of course looking for a performance boost.
From recent testing: if object creation is not expensive (i.e. doesn't depend on external resources such as accessing files, the registry, or a database), then you'll have a hard time beating Delphi's memory manager. It is that fast.
That of course holds only if you're using a recent Delphi; if not, get FastMM4 from SourceForge and use it instead of Delphi's internal memory manager.
Memory allocation is only a small part of why you would want to cache. You need to know the full cost of constructing a semantically valid object, and compare it with the cost of retrieving items from the cache, and not just for a micro-benchmark: cache effects (CPU cache, that is) may change the runtime dynamics in a real live running application.
Or to put it another way, measure it and find out. If you're not measuring, you're not engineering, just guessing.
Only a profiler will tell you. Try both approaches in a tight loop and see what comes out on top :-)
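A minimal sketch of such a tight-loop comparison (TMyObject and Reset are illustrative stand-ins for your real class; GetTickCount keeps it compatible with older Delphi versions):

    program CreateVsReuse;
    {$APPTYPE CONSOLE}

    uses Windows;

    type
      TMyObject = class
      private
        FData: array[0..63] of Byte;
      public
        procedure Reset;
      end;

    procedure TMyObject.Reset;
    begin
      FillChar(FData, SizeOf(FData), 0);  // stand-in for reinitialising state
    end;

    const
      N = 1000000;
    var
      I: Integer;
      Obj: TMyObject;
      T0: Cardinal;
    begin
      // Approach 1: create and free a fresh object every iteration.
      T0 := GetTickCount;
      for I := 1 to N do
      begin
        Obj := TMyObject.Create;
        Obj.Free;
      end;
      Writeln('create/free: ', GetTickCount - T0, ' ms');

      // Approach 2: reuse a single cached instance.
      T0 := GetTickCount;
      Obj := TMyObject.Create;
      try
        for I := 1 to N do
          Obj.Reset;
      finally
        Obj.Free;
      end;
      Writeln('reuse:       ', GetTickCount - T0, ' ms');
    end.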
You absolutely have to measure with real-world loads to answer questions like this. Depending on what resources are held in those objects, any resource contention, construction cost, size, etc., the answer may surprise you, and may even change depending on the nature of the load.
It is usually very difficult to determine where your performance issues will be without measuring.
I think this depends on the code your objects execute during create and destroy. The impact of TObject.Create and TObject.Destroy is normally negligible and may easily be outweighed by the caching overhead.
You should also consider that the state of a reused object may differ from that of one that has just been created.
Often the only way to tell - is to try it.
If current performance is adequate then you don't have much call to try and increase it. However, if you have performance issues, then some caching (or indeed some other strategies) may help.
You will also need some stats on how often a specific object (instance) is being used. If you're referencing the same set of data regularly, then caching may really improve performance, but if the accesses are distributed across all the possible objects, then your cache miss rate might be too high for it to be worthwhile.
