I have already unsubscribed from a table in DolphinDB, but when I executed getStreamEngineStat().AsofJoinEngine, it still showed memory occupied by the engine.
Does the function return real-time memory information on the stream engine? How can I check the current memory status and release the memory?
getStreamEngineStat().AsofJoinEngine does return real-time memory information on the stream engine.
Undefining the subscription is not equivalent to dropping the engine. In your case, if you want to release the memory occupied by the engine, you first need to call dropStreamEngine and then set the variable returned by createAsofJoinEngine to NULL.
For example, suppose you define the engine as follows:
ajEngine=createAsofJoinEngine(name="aj1", leftTable=trades, rightTable=quotes, outputTable=prevailingQuotes, metrics=<[price, bid, ask, abs(price-(bid+ask)/2)]>, matchingColumn=`sym, timeColumn=`time, useSystemTime=false, delayedTime=1)
After undefining the subscription, you can release the memory correctly with the following code:
dropStreamEngine("aj1")  // 1. Drop the specified engine
ajEngine = NULL          // 2. Release the engine handle from memory
If you find that the asof join engine occupies too much memory while the subscription is active, you can also specify the garbageSize parameter of createAsofJoinEngine to clean up historical data that is no longer needed. The right value depends on your workload; estimate it roughly from how many records each key receives per hour.
The principle is this: garbageSize applies per key, i.e., the data within each key group is cleaned up once the threshold is reached. If garbageSize is too small, frequent cleanup of historical data adds unnecessary overhead; if it is too large, the threshold may never be reached, cleanup is never triggered, and stale historical data is left behind.
A reasonable rule of thumb is therefore to size garbageSize so that cleanup happens roughly once per hour.
I've been programming a solitaire card game and all has gone well so far: the basic engine works fine, and I've even added features like auto-move on click, auto-complete when won, and unlimited undo/redo. But now I've realised the game cannot be fully resumed, i.e. saved so that it continues from the exact position it was in when it was last open.
I'm wondering how an experienced programmer would approach this, since it doesn't seem as simple as in other games, where saving a few numbers, like the level number, is enough to resume.
The way it is now, all game objects are created on a new game: the cards, the slots for foundations, tableaus, etc., and then the cards are shuffled and dealt out. The deal is random, but the way I see it, the game needs to remember that random deal so it can deal the cards exactly the same way when the game is resumed. Then all moves that were executed have to be executed again as they were. The result looks like the game did when it was last played, but in fact every move has been replayed from the beginning. I'm not sure this is the best way to do it and am interested in other approaches, if there are any.
I'm wondering if any experienced programmers could tell me how they would approach this and perhaps give some tips/advice.
(I am going to assume this is standard Klondike Solitaire.)
I would recommend designing a save structure. Each card should have a suit and a value variable, so I would write out:
[DECK_UNTURNED]
H 1
H 10
S 7
C 2
...
[DECK_UNTURNED_END]
[DECK_TURNED]
...
[DECK_TURNED_END]
etc
I would do that for each location where cards can be stacked: the unrevealed deck cards, the revealed deck cards, each of the seven main slots (the tableaus), and the four winning slots (the foundations). However you read them in and out, just make sure they end up in the same order.
When you go to read the file, a simple way is to read the entire file into a vector of strings, then iterate through the vector until you find one of your block markers.
if (vector[iter] == "[DECK_UNTURNED]")
Now you go into another loop, using the same vector and iter, and keep reading in cards until you reach the matching end marker.
while (vector[iter] != "[DECK_UNTURNED_END]")
{
    // parse a card from vector[iter]...
    ++iter;
}
This is how I generally do all my save files: create [DATA] blocks and read until you reach the matching end block. It is not very elaborate, but it works.
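To make this concrete, here is a minimal, self-contained C++ sketch of that block-reading loop (the Card type, the file name, and the parsing details are my own assumptions, not from your game):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Card {
    char suit;   // 'H', 'S', 'C', 'D'
    int  value;  // 1..13 (ace..king)
};

// Collect every card between the current position and the end tag.
std::vector<Card> readBlock(const std::vector<std::string>& lines,
                            std::size_t& iter,
                            const std::string& endTag) {
    std::vector<Card> cards;
    while (iter < lines.size() && lines[iter] != endTag) {
        std::istringstream in(lines[iter]);
        Card c;
        if (in >> c.suit >> c.value)
            cards.push_back(c);
        ++iter;
    }
    return cards;
}

int main() {
    // Read the whole save file into a vector of strings.
    std::ifstream file("savegame.txt");
    std::vector<std::string> lines;
    for (std::string line; std::getline(file, line); )
        lines.push_back(line);

    std::vector<Card> deckUnturned;
    for (std::size_t iter = 0; iter < lines.size(); ++iter) {
        if (lines[iter] == "[DECK_UNTURNED]") {
            ++iter;  // step past the opening tag
            deckUnturned = readBlock(lines, iter, "[DECK_UNTURNED_END]");
        }
        // ...handle [DECK_TURNED] and the other blocks the same way.
    }
    std::cout << deckUnturned.size() << " unturned cards loaded\n";
}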
Your idea of replaying the game up to a point is good. Just save the undo info and redo it at load time.
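If you go the replay route, the save file can be tiny: just the seed used for the deal plus the ordered list of moves. A rough sketch under that assumption (the Move fields here are invented; use whatever your undo system already records):

#include <cstdint>
#include <fstream>
#include <vector>

// One executed move; adjust fields to whatever your engine tracks.
struct Move {
    std::uint8_t fromPile;
    std::uint8_t toPile;
    std::uint8_t count;  // number of cards moved together
};

struct SaveState {
    std::uint32_t     dealSeed;  // re-seed the shuffle RNG with this
    std::vector<Move> moves;     // replay these in order on load
};

void save(const SaveState& s, const char* path) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(&s.dealSeed), sizeof s.dealSeed);
    const std::uint32_t n = static_cast<std::uint32_t>(s.moves.size());
    out.write(reinterpret_cast<const char*>(&n), sizeof n);
    out.write(reinterpret_cast<const char*>(s.moves.data()), n * sizeof(Move));
}

On load you read the seed back, shuffle with the same deterministic generator (e.g. std::mt19937 seeded with dealSeed), then apply the moves in order; as a bonus, the entire undo history is restored for free.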
I am trying to understand the meaning of the Transient and Persistent columns in the Allocations template. From the tutorial http://www.raywenderlich.com/97886/instruments-tutorial-with-swift-getting-started I found:
"The Persistent column keeps a count of the number of objects of each type that currently exist in memory. The Transient column shows the number of objects that have existed but have since been deallocated. Persistent objects are using up memory, transient objects have had their memory released.
"
According to the explanation above and the selected row in the Statistics table in the picture, it can be said that 2 NSFileManager objects currently exist in memory, and 19 objects were created but have already been released.
But what does that mean for optimization or performance issues in an iOS app?
Something like: here the total number of transient objects is 19, which is considerably large; should it be kept small if possible to improve the app's effective memory usage, or is it something else?
Optimization for performance, in short, means keeping your app alive and responsive.
The key metric for optimization is not the transient or persistent count of a single object type.
Based on the information shown, your NSFileManager uses 16 bytes per object.
So that is 32 bytes currently persistent (2 × 16) and 336 bytes total (21 × 16, where 21 = 2 persistent + 19 transient objects).
A high persistent memory indicates that your current footprint is very high for the given object. A high total memory indicates that your footprint in the past might have been high (if a subset of those allocations existed simultaneously).
While optimizing you should focus mainly on two aspects:
1. The minimum memory footprint when your app loads.
2. The maximum memory footprint (you need to come up with use cases to figure this one out).
As your memory footprint increases, your app's performance drops because of the page swaps the OS performs to free up memory. You can track this with the VM Tracker instrument. Optimization means keeping your average memory footprint below that point.
"Persistent objects are using up memory, transient objects have had their memory released."
The first column says # Persistent. This is the number of persistent objects that are being strongly referenced in your project at this moment in time. The second says # Transient. This is the number of deallocated objects that used to be strongly retained but no longer exist. This is handy because it lets you know whether an object is being cleaned up properly, or whether an object is no longer retained at a particular moment in time. The third says # Total. This is the count of persistent and transient objects added together.
Like the CPU simulation:
I need to write an application that can simulate high memory usage at a pre-set value (e.g., 30%, 50%, 90%) for a certain duration, meaning it takes two inputs (memory value and duration). Say I use 50% for memory usage and 2 minutes for duration: when I run the application, it should occupy 50% of memory for 2 minutes. Any ideas how this can be achieved?
Any help is appreciated.
You can simulate a memory leak like this (taken from this thread):
var list = new List<byte[]>();
while (true)
{
list.Add(new byte[1024]); // Change the size here.
}
Similarly to the app I wrote for simulating CPU load for a specific amount of time, you just write a method that allocates an amount of memory and start a timer; when the timer fires, you clear the list and then invoke the garbage collector.
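Here is a sketch of that allocate-hold-release idea (shown in C++ rather than C#; the 512 MB target and the 2-minute duration are placeholders, and hitting a true percentage would additionally require querying total physical memory, which is platform-specific):

#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

int main() {
    // Placeholder inputs: in the real tool these come from the user.
    const std::size_t targetBytes = 512ull * 1024 * 1024;  // ~512 MB
    const auto holdFor = std::chrono::minutes(2);

    // Filling the buffer touches every page, so the OS actually
    // commits the memory instead of just reserving address space.
    std::vector<char> block(targetBytes, 1);

    std::this_thread::sleep_for(holdFor);  // hold the footprint

    return 0;  // block is freed automatically on scope exit
}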
Watch out: if you allocate too much memory, your system might become unresponsive, and you might even crash it.
For example, in Fallout 3, a save game stores the state and location of every single object and NPC in the game, and only takes up a few MB. How do they do that!?!?
And then, during gameplay, how is this data added to and retrieved from memory such that it can be displayed to the player in real time?
UPDATED: (I'm going to make you work for your answers :P)
Based on Kevin Crowell's answer...
So I guess you would have a rendering distance that applies to objects and NPCs, and you would "SELECT" the objects and NPCs within the given range. However, what type of data store would you use in order to get these objects?
In other words, would you have a gigantic array of every object in the game and constantly update a smaller list that holds the visible objects to render?
Also, per Chaos' answer...
What would happen if you eventually touched every object in the game? Would your save game get bigger and bigger? In the case of Fallout 3, I'm pretty sure there aren't "stages" where past data could just be dropped. Everything is persisted when you leave/return to a location. So how do you think this specific case is implemented?
With all the big hard disks nowadays, even developers seem to forget how many bytes there are in a megabyte. So to answer the question in the title: games store large amounts of data by creating savegames that are several megabytes large.
To illustrate how big a megabyte is: it's 8 million bits, which is sufficient to encode 2^8000000 ≈ 10^2400000 states. In comparison, there are only about 10^80 atoms in the universe. Now, in a (save)game there are multiple subsystems with distinct states; e.g. in an RPG each NPC has its own state. But how much state is there, really? Their position in a town might be saved as 16 bits (do you remember their exact position if they're walking around anyway?), their mood/disposition/etc. as another 8 bits, and that already allows for more emotions than some people have.
When it comes to storing this kind of data in-game, a typical data structure is a quadtree: a structure that lets you find the objects in a given X-Y region in O(log N). In some cases, game developers find it easier to pre-partition the world into zones, which reduces the amount of run-time calculation. A good example was Doom: its maps had visibility pre-calculated, so for each point one could quickly determine which zone it belonged to, and for each zone the set of visible objects was pre-computed. This reduced the number of objects that needed runtime visibility checks.
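For illustration, here is a compact sketch of the kind of quadtree region query being described (points only; a real engine would store object IDs and bounding boxes rather than bare points):

#include <memory>
#include <vector>

struct Point { float x, y; };

struct Rect {
    float x, y, w, h;  // top-left corner and size
    bool contains(Point p) const {
        return p.x >= x && p.x < x + w && p.y >= y && p.y < y + h;
    }
    bool intersects(const Rect& o) const {
        return !(o.x >= x + w || o.x + o.w <= x ||
                 o.y >= y + h || o.y + o.h <= y);
    }
};

class QuadTree {
    static constexpr std::size_t kCapacity = 4;  // split threshold
    Rect bounds_;
    std::vector<Point> points_;
    std::unique_ptr<QuadTree> kids_[4];  // NW, NE, SW, SE

    void split() {
        const float hw = bounds_.w / 2, hh = bounds_.h / 2;
        kids_[0].reset(new QuadTree({bounds_.x,      bounds_.y,      hw, hh}));
        kids_[1].reset(new QuadTree({bounds_.x + hw, bounds_.y,      hw, hh}));
        kids_[2].reset(new QuadTree({bounds_.x,      bounds_.y + hh, hw, hh}));
        kids_[3].reset(new QuadTree({bounds_.x + hw, bounds_.y + hh, hw, hh}));
    }

public:
    explicit QuadTree(Rect b) : bounds_(b) {}

    bool insert(Point p) {
        if (!bounds_.contains(p)) return false;
        if (points_.size() < kCapacity && !kids_[0]) {
            points_.push_back(p);
            return true;
        }
        if (!kids_[0]) split();
        for (auto& k : kids_)
            if (k->insert(p)) return true;
        return false;
    }

    // Collect every point inside `range`, skipping whole subtrees
    // whose bounds don't overlap it.
    void query(const Rect& range, std::vector<Point>& out) const {
        if (!bounds_.intersects(range)) return;
        for (Point p : points_)
            if (range.contains(p)) out.push_back(p);
        if (kids_[0])
            for (const auto& k : kids_)
                k->query(range, out);
    }
};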
It can simply be a mapping of objects, or NPCs, to an X,Y,Z coordinate system. That information can be stored cheaply.
During gameplay, all of those objects are still mapped to a coordinate system at all times; the game just needs to read in the save information and start from there.
I think you're overestimating the complexity of what's being stored/retrieved. You don't need to store the 3D models for the objects, or their textures, or any of the other things that make up large parts of a game's size on disk.
First of all, as chaos mentioned, it's only necessary to store information about things that have moved. Even then, you probably only need to store, for each of those, the new position and orientation (assuming there are no other variables involved, like "damaged"). That's two vectors per object, around a grand total of 24 bytes each, which means you can store the information for roughly 40,000 objects per megabyte. That's an awful lot of objects to have moved around.
Restoring this data is no more complex than placing the objects in the first place. Every object has to have a default position/orientation defined for the game to put it somewhere, so all you're doing is replacing the default with the stored value from the save file. This is not complex and doesn't require any significant additional processing.
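The 24 bytes above are the two vectors themselves; in practice each record also has to say which object it belongs to, so a save record might look like this sketch (the id field is an assumption on top of that count):

#include <cstdint>

struct Vec3 { float x, y, z; };  // 3 x 4 bytes = 12 bytes

struct MovedObject {
    std::uint32_t id;   //  4 bytes: index into the game's object table
    Vec3 position;      // 12 bytes
    Vec3 orientation;   // 12 bytes, e.g. Euler angles
};
// sizeof(MovedObject) == 28, so one megabyte still holds roughly
// 1,000,000 / 28, i.e. about 37,000 moved objects.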
In Fallout 3 in particular, the map is divided in a grid fashion; you can only see your current square and the ones immediately next to it. The type of data store is not really important: it can be a SQLite database, a tree serialized to disk, or something else entirely.
"...would you have a gigantic array of every object in the game, and constantly update a smaller list that holds the visible objects to render?"
Generally yes, but the "gigantic array" doesn't need to be in memory. And there are more lists than just the visible one: objects in the current and adjacent grid squares (you can be attacked from behind by something not on the visible list), the visible list, the timer list...
"What would happen if you eventually touched every object in the game? Would your save game get bigger and bigger?"
It could: if there is a default state table for everything, the save can contain only the differences, and the save will then grow as you progress.
"Everything is persisted when you leave/return to a location."
Nope. Items you drop outside your house will eventually disappear, and bodies too, probably. Random monsters are respawned every once in a while. This is both convenient for the game designers and consistent with the real world.
If you think about the information you need to save, it's really not that much. For example:
Position
Orientation
Inventory
Health
Objective-state
There are lots more, of course, many of which depend on both the type of game and how the save structure is organized.
Some games, like Resident Evil, only allow saves when you enter a new zone, meaning you don't have to store all the information for entities in both zones. When you load a save, their attributes come from the disc.
As to how this data is retrieved/modified: I'm not quite sure I understand. It's just data in the console's memory. When the player saves, it's written to the save device; when they load, it's restored.
One major technique is differential saves: only saving state that's something other than its default. Compare and contrast "saving the state and location of every object in the game world" with "saving the state and location of every object in the game world that the player has moved or altered".
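A sketch of that differential idea (the object table and field names are invented for illustration): iterate over the live objects and write a record only for those that differ from the authored default, so an untouched world saves as an empty list.

#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

inline bool operator==(const Vec3& a, const Vec3& b) {
    return a.x == b.x && a.y == b.y && a.z == b.z;
}

struct ObjectState {
    Vec3 position;
    Vec3 orientation;
};

inline bool operator==(const ObjectState& a, const ObjectState& b) {
    return a.position == b.position && a.orientation == b.orientation;
}

struct SaveRecord {
    std::uint32_t id;    // which object this record overrides
    ObjectState   state; // its current (non-default) state
};

// Defaults come from the shipped game data; current is the live world.
std::vector<SaveRecord> diffWorld(const std::vector<ObjectState>& defaults,
                                  const std::vector<ObjectState>& current) {
    std::vector<SaveRecord> out;
    for (std::uint32_t i = 0; i < current.size(); ++i)
        if (!(current[i] == defaults[i]))
            out.push_back({i, current[i]});
    return out;
}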
Echoing the other answers, the biggest savings comes from eliminating all unnecessary state data.
If you look at 8-bit side-scroller games, they will start discarding state as soon as things are offscreen, and oftentimes retain nothing, because their resources are too tight to keep around more than the minimum number of instances.
Doing it at the macro level for a game like Fallout 3 is just a matter of increasing the scope of the problem. You start sectioning up the landscape by grid or other geometric methods and spawn/despawn stuff as the player moves from one section to the next (sketched below); ideally you keep each area small so that the in-memory state stays low. You figure out the bare minimum of state needed to keep NPC and item instances around, and in the layout data you tag as much as possible to auto-respawn so that it doesn't need any state saved.
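As a rough sketch of that grid sectioning (the cell size, the 3x3 residency window, and the spawn/despawn stand-ins are all invented for illustration):

#include <cmath>
#include <cstdio>
#include <set>
#include <utility>

using Cell = std::pair<int, int>;
constexpr float kCellSize = 64.0f;  // world units per grid square

Cell cellOf(float x, float y) {
    return { static_cast<int>(std::floor(x / kCellSize)),
             static_cast<int>(std::floor(y / kCellSize)) };
}

// Keep only the player's cell and its 8 neighbours resident:
// spawn cells entering the window, despawn cells leaving it.
void updateStreaming(float px, float py, std::set<Cell>& resident) {
    const Cell c = cellOf(px, py);
    std::set<Cell> wanted;
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
            wanted.insert({c.first + dx, c.second + dy});

    for (const Cell& cell : wanted)
        if (!resident.count(cell))
            std::printf("spawn (%d,%d)\n", cell.first, cell.second);
    for (const Cell& cell : resident)
        if (!wanted.count(cell))
            std::printf("despawn (%d,%d)\n", cell.first, cell.second);

    resident = wanted;
}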
If you want to be pointed at a specific data structure, an example serialization format might be a linear stream indexed by a tree of pointers, where the organization of the tree corresponds to the map layout.
On a related note, game engines often employ Zip compression to keep the size of all that content down and also to make some operations faster.
Besides what everybody else said, I would like to add that state doesn't necessarily mean just position and movement, but also the properties of the respective state. Usually a game engine has a feature which allows you to save the data of a certain class.
Say you have a Player class and you are well into the story. When you click save, the possible data that can be stored includes:
Where the player is located in the level/map
What his attributes are: health, mana, strength, intelligence, etc.
What skills he has
What level he is
Globally we can also have:
How many references (names that allow the engine to pick an object from a list) to objects are stored in that specific level; in other words, which objects should be loaded along with it when you load.
Whether we are using physics, and if so, who uses it.
And many more. Fallout 3 has one type of save; another game will have another. It really depends on the genre and the engine in use.