Quick tips to improve Pylon playback performance

Turn off all the data channels (visualizations) that are not in use.
If high resolution is not needed, set FPS to low to reduce the amount of data loaded; turn it back up and refresh only once the spot of interest is identified. Use the Copy link button to save the time-offset URL.
Turn off large data channels. Seek to the moment of interest, then turn the large data channel back ON.
Stepping through frames with the left/right arrow keys gives somewhat smoother performance than seeking randomly.
Concurrency is essentially the number of workers fetching data. If the user’s machine is already at its data-transfer limit, more concurrency just adds overhead (timestamp 46:40).
(warning) Setting the Buffer value very high may load data that is never needed.
Sometimes a “performance glitch” is actually caused by data unavailability or some other issue. DP is working on an overhaul of Pylon to show users the (un)availability of data in the UI.

Related

CoreData and a large initial data load

I'm performing a large initial data load into Core Data on iOS. My app ships with about 500 compressed JSON files (about 250MB inflated). I sequentially decompress and load each file into Core Data. I'm pretty sure I am closing all the streams. In the beginning I had a large connected graph using relationships; I have since replaced all relationships with URIs, so there is no explicit object graph. My store type is SQLite. I'm assuming an iPhone 6 at iOS 12+.
Data is loaded on a background queue and I can watch the progress on my iOS device. I do periodic context saves and resets at logical stopping points.
My problem is that the initial data load consumes too much memory: about 600MB before the app is terminated for memory issues. If I stop the load about halfway, memory consumption peaks at 300MB+ and then rapidly falls off to 13MB. It doesn't look like a memory leak. Apart from the managed object context, there are no objects that span the breadth of the load.
What this looks like to me is that Core Data is not flushing anything to storage because flushing runs at lower priority than my inserts. In other words, the backlog grows too fast.
If I add a delay using asyncAfter to the last half of the data load, memory stays at the low-water mark of about 13MB.
So here are the questions:
does my diagnosis seem plausible?
is there some magic method that will cause Core Data to flush its cache?
can I create a queue that is lower priority than whatever Core Data uses to flush objects to storage?
conversely, do I need to throttle my inserts?
am I using Core Data in a manner it wasn't designed for (i.e., a ~250MB database)?

Lightweight iOS performance profiling

I need to do some performance profiling of an iOS app (CPU usage, memory usage, network usage). I need a way to store the results, with graphs of those metrics, to compare over time. I want useful/helpful graphs that are hopefully small in size; I'm not necessarily interested in stack traces across all threads for each time slice or that type of additional fluff.
I have tried Instruments with the Time Profiler (and some of the other templates), but I have two big issues:
The graphs are tiny to the eye and not particularly helpful.
A 30-second profile used up something like 100MB, which is too big for long-term storage, as each profiling session will probably run 10+ minutes.
You can do 2 things:
After entering Instruments, there are Record and Pause buttons. Use the Pause button to pause and resume profiling around the operation you care about.
Under Instruments -> Preferences -> Recording tab, there is a Background Sampling Duration parameter that lets you specify how often activity is recorded. Play with this parameter; you may get your desired file size.
In the same Recording tab there is one more parameter, Max Backtrace Depth, which changes the size of your recorded call stack. You can also play with it and observe how the file size changes.

AVX2 Streaming Stores Do Not Improve Performance

I have an AVX2 implementation of some workload.
I have determined that the vast majority of the execution time is occupied
by the memory loads and stores.
In an attempt to improve performance, I tried to change the conventional stores
to streaming (non-temporal) stores.
However, this change had little to no positive performance impact (I was expecting a sizeable performance increase).
What could be the reason for this?
The use of streaming stores can lead to better performance under some circumstances:
The data "to be stored" is not read before writing: streaming stores are write-through, which produces immediate bus traffic. A standard store uses a write-back strategy, which may delay the bus operation until a later time and avoids bus operations when there are multiple writes to the same cache line.
The time used for stores is smaller than the time used for computation: a streaming store has to finish before the next streaming store can be issued. Thus, having too little computation between two streaming stores leads to idle time for the processor in which no further computation can be executed. While this problem may also occur with standard stores, streaming stores make it worse.
The data "to be stored" is not needed shortly after being written: the streaming store bypasses caches while writing/storing, so there is no copy of the data in the cache. When reading the data afterwards, it has to be loaded into the cache, and you have no gain over a standard store. With a standard store, however, the data is loaded into the cache, modified there, and may still be there when a later access happens.
So you have to weigh your code and problem against these circumstances to know whether streaming stores are worth a try. In an ill-suited scenario your performance might even drop.
A blog entry with additional info and a benchmark can be found e.g. here.

What is buffer? What are buffered reads and writes?

I heard the word buffer after a long time today and wondering if somebody can give a good overview of buffer and some examples of how it matters in today's world.
A buffer is generally a portion of memory holding data that has not yet been fully committed to its intended device. In buffered I/O there is generally a fast device and a slow device. The devices themselves need not have disparate speeds; perhaps the interfaces between them differ, or producing the data is more time-consuming than consuming it (or vice versa).
The idea is that you temporarily store the generated data in a buffer so that it is not lost when the slower device isn't ready to handle it. Once the device is ready, another buffer may take the current buffer's place and the consuming device will process the data in the first buffer.
In this manner, the slower device receives the data at a moderated pace rather than the fire-hose that the original data source can be.

How many 'screens' of data could a game store before having to delete some?

Assuming I was making a Temporal-esque time-travel game and wanted to save the current state of the screen (location of player and enemies, whether or not destructible objects are destroyed, et cetera) every second to an array, how much data would I be able to store in this array before the game would start to lag considerably and I would have to either delete the array or save it to a file outside the game (i.e., a .bin)?
On a similar note, is it faster to constantly save every screen to a .bin, or to do so only when necessary (start saving when the array is halfway 'full', for example)?
And I know that the computer it is being run on matters, but assume it is being run on a reasonably modern computer (not old, but not a NASA supercomputer either), particularly because I have no way of telling exactly what the people who play the game will be using.
Depending on how you use the data afterwards, you could consider storing the changes between states instead of the actual states.
You should use a buffer to reduce the number of I/O-operations. Put data in main memory and write a larger amount of data to disk when needed.
It would depend on the number of objects you need to save and how much memory is taken up by each object.
Hypothetically, let's take a vastly oversimplified and naive example and say that your game contains an average of 40 objects, each of which has 20 properties that take up two bytes of storage. That's 1600 bytes per second, if you're saving each second.
Well it is impossible to give you an answer that will definitely work for your scenario. You'll need to try a few things.
Assuming you are loading large images, sounds, or graphics from disk, it may not be good to write to disk at high frequency due to contention. I say may because it really depends on the computer and everything else that is going on. So how do you deal with this issue? One way is to run a background thread that watches a queue for items that need to be written to disk; the thread can wait for a certain number of items to accumulate before writing. The alternative is to wait for certain other in-game events where I/O is already happening and save then. You may need to analyse the size of the events you are saving and try different options.
You would want to estimate how much data is saved per screen, decide how much of someone's memory you want to use, and then just divide, because you will see huge variances. I am using a 64-bit OS, so how much you can store on my machine is different than on a 32-bit machine.
You may want to decide what is actually needed. For example, if you just save the properties of each object into a JSON array you may save yourself some space. You also want to limit how much you write to disk; writes should go through a single separate thread that is the only one writing to this file, so that you don't have two threads trying to access the same resource. Queue up the writes.
Saving the music, for example, may not be useful, as that will change anyway, I expect.
Be very judicious about what you save; try it and see whether you are saving enough.

Resources