The direction of stack growth and heap growth - memory

On some systems the stack grows upward while the heap grows downward, and on other systems the reverse is true. Which is the better design? Are there any programming advantages to either layout? Which is most commonly used, and why has a single approach not been standardized? Are they targeted at certain specific scenarios? If so, what are they?

Heaps only "grow" in a direction in very naive implementations. As Paul R. mentions, the direction a stack grows is defined by the hardware: on Intel CPUs, it always grows toward smaller addresses (i.e. "down").
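One way to observe the stack direction on a given platform is to compare the addresses of locals across nested calls. This is only a sketch: strictly speaking, comparing addresses of unrelated objects is not portable C, the result is platform-specific, and the `noinline` attribute is a GCC/Clang extension used here to keep the frames distinct.

```c
#include <stdint.h>

/* Returns 1 if the inner call frame's local sits at a lower address
   than the outer frame's local, i.e. the stack grows "down" toward
   smaller addresses (as on x86). Platform-specific sketch only. */
__attribute__((noinline))
static int inner(uintptr_t outer_local_addr)
{
    char inner_local;
    return (uintptr_t)&inner_local < outer_local_addr;
}

int stack_grows_down(void)
{
    char outer_local;
    return inner((uintptr_t)&outer_local);
}
```

On a typical x86-64 or ARM Linux build this reports a downward-growing stack; on unusual architectures (e.g. some HP PA-RISC configurations) it would not.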

I have read the works of Miro Samek and various other embedded gurus, and it seems they are not in favor of dynamic allocation on embedded systems. That is probably due to the complexity and the potential for memory leaks. If you have a project that absolutely can't fail, you will probably want to avoid using malloc, so the heap will be small. Other, non-mission-critical systems could be just the opposite. I don't think there is a standard approach.

Maybe it just depends on the processor: does it support the stack growing upward or downward?


Particle Swarm Optimisation: Converges to local optima too quickly in high dimension space

In a portfolio optimisation problem, I have a high-dimensional (n=500) space with lower and upper bounds of [0, 5,000,000]. With PSO I am finding that the solution converges quickly to a local optimum, and I have narrowed the problem down to a number of areas:
Velocity: Particle velocity rapidly decays to extremely small step sizes [0-10] relative to the bounds [0, 5,000,000]. One stopgap I have found is changing the velocity update to a fixed binary step size [e.g. 250,000] via a sigmoid function, but this is clearly just a patch. Any recommendations on how to keep the velocity high?
Initial feasible solutions: When initialising 1,000 particles, I might find that only 5% are feasible under my constraints. I thought I could improve the search by re-running the initialisation until all particles start in the feasible region, but it turns out this actually performs worse, and the particles just stay stuck close to their initialisation vectors.
With respect to my parameters, w=c1=c2=0.5. Is this likely to be the source of both problems?
I am open to any advice, as in theory this should be a good approach to portfolio optimisation, but in practice I am not seeing it.
Consider changing the parameters. Using w=0.5 'stabilizes' the particle and thus prevents escape from local optima, because the swarm converges early. Furthermore, I would suggest setting c1 and c2 larger than 1 (2 is the commonly suggested value), and perhaps making the coefficient for the global-best attraction slightly smaller than the one for the personal best, to prevent overcrowding on a single solution.
Also, have you tried running the PSO with more particles? People usually use 100-200 particles for 2-10-dimensional problems; I don't think 1,000 particles will be enough for a 500-dimensional space. I would also suggest a more advanced initialization method instead of a normal or uniform distribution (e.g. a chaotic map, a Sobol sequence, or Latin hypercube sampling).
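For reference, the standard PSO update that w, c1 and c2 plug into can be sketched for a single dimension like this (illustrative code, not from the question; r1 and r2 stand for uniform random draws in [0, 1], and vmax is a velocity clamp that keeps step sizes from decaying or exploding):

```c
/* One-dimensional slice of the canonical PSO update, showing where
   the inertia weight w and the coefficients c1 (personal best) and
   c2 (global best) enter. Names here are illustrative. */
typedef struct {
    double x;       /* position        */
    double v;       /* velocity        */
    double pbest;   /* personal best x */
} Particle;

double pso_update(Particle *p, double gbest,
                  double w, double c1, double c2,
                  double r1, double r2, double vmax)
{
    p->v = w * p->v
         + c1 * r1 * (p->pbest - p->x)   /* cognitive pull */
         + c2 * r2 * (gbest   - p->x);   /* social pull    */
    if (p->v >  vmax) p->v =  vmax;      /* clamp step size */
    if (p->v < -vmax) p->v = -vmax;
    p->x += p->v;
    return p->v;
}
```

With w=c1=c2=0.5, every term shrinks the update: the inertia halves the previous velocity and the attraction terms contribute at most half of each remaining distance, which matches the rapid velocity decay described in the question.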

Are there any downsides of using satellite view in mapkit?

I wonder if there are any downsides to using satellite mode in MKMapView.
Does it perform as well as the standard map type? Maybe it consumes more RAM or downloads more data?
I'm asking because using only satellite view would be a much better fit for my app, but I'd like to know about any consequences in advance.
As far as I can check right now, I cannot see any performance decrease compared to the standard map type. However, my use case is pretty basic at the moment, and there are probably issues I cannot detect this way.
So my question is about known performance issues with satellite view.
EDIT
I played with both satellite and standard maps (zooming, jumping all over the world, etc.) and it turns out that satellite consumes less memory than standard. How come?
Based on doing map tile (256 × 256) captures for offline use, satellite and hybrid map tiles average around 90 KB each in rural areas, while standard map tiles average about 10 KB each in those same areas, so there is a major impact on the volume of data downloaded and therefore on the time required. Note that there is fairly wide variance in size from tile to tile depending on content, though the ratio stays fairly constant.
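A quick back-of-the-envelope estimate from those averages (the tile counts and region size here are made up for illustration):

```c
/* Download volume for a region of map tiles at a given average tile
   size. Using the rural averages quoted above (~90 KB satellite,
   ~10 KB standard), a 100-tile region costs roughly 9 MB vs 1 MB. */
long bytes_for_region(int tile_count, long avg_tile_bytes)
{
    return (long)tile_count * avg_tile_bytes;
}
```

So for the same viewport, satellite imagery pulls roughly nine times the data of the standard map, which matters far more for download time and cellular data than for memory.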

Optimize OpenGL ES 2.0 drawing iOS

I have this huge model (a helix) created with 2 million vertices at once, and some million more indices specifying which vertices to use.
I am pretty sure this is a very bad way to draw so many vertices.
I need some hints as to where I should start optimizing.
I thought about copying one turn of my helix (the vertices) and offsetting its z. But in the end, I would again be drawing a lot of triangles at once...
How naive are you currently being? As per rickster's comment, there's a serious case of potential premature optimisation here: the correct way to optimise is to find the actual bottlenecks and to widen those.
Knee-jerk thoughts:
Minimise memory bandwidth. Pack your vertices into the smallest space they can fit into (i.e. limit precision where it is acceptable to do so) and make sure all the attributes that describe a single vertex are contiguously stored (i.e. the individual arrays themselves will be interleaved).
Consider breaking your model up to achieve that aim. Instanced drawing as rickster suggests is a good idea if it's sufficiently repetitive. You might also consider what you can do with 65536-vertex segments, since that lets you use 16-bit indices, cutting your index size in half.
Use triangle strips if it allows you to specify the geometry in substantially fewer indices, even if you have to add degenerate triangles.
Consider where the camera will be. Do you really need that level of detail all the way around? Will the whole thing ever even be on screen? If not, then consider level-of-detail solutions and subdivision for culling (both outside the viewport and, within it, via occlusion queries).
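The first point — packing and interleaving — can be sketched as a vertex layout. This struct is an illustration of the idea, not the questioner's actual format; the quantized fields would be rescaled in the shader (or via the `normalized` flag of `glVertexAttribPointer`):

```c
#include <stdint.h>
#include <stddef.h>

/* Interleaved, packed vertex: all attributes for one vertex are
   contiguous, and precision is limited where acceptable.
   16-bit positions + 8-bit normals + 16-bit UVs = 14 bytes,
   versus 32 bytes for the same attributes as full floats. */
typedef struct {
    int16_t  position[3];  /* quantized x, y, z     : 6 bytes */
    int8_t   normal[3];    /* quantized nx, ny, nz  : 3 bytes */
    int8_t   pad;          /* keeps 2-byte alignment: 1 byte  */
    uint16_t uv[2];        /* quantized texcoords   : 4 bytes */
} PackedVertex;

size_t packed_vertex_size(void) { return sizeof(PackedVertex); }
```

For 2 million vertices that is roughly 28 MB instead of 64 MB of attribute data, before even touching the index buffer.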

Balance box2d objects

Please check the attached image; it's a kind of seesaw. As the image shows, the black bodies have the same density, and the horizontal rectangle is attached to the triangle with a revolute joint. It still doesn't balance; any suggestions? In the current situation it needs to come to rest balanced.
Due to tiny imbalances in the layout caused by the limitations of floating point precision etc, it's highly unlikely that this will ever balance in the middle reliably (just like real life). One thing you could try is to give the beam some angular damping, which would make it less easy to swivel around, so it would slow down quicker and sleep earlier. That might be enough to get it to come to rest without falling to one side or the other.
I don't think it is a floating-point precision problem; at least it couldn't appear that fast. As far as I know, Box2D resolves contacts (including resting contacts) one by one. That is much faster than simultaneous contact resolution, but less precise, since resolving one contact affects the others.
I would try adding a motor with a small maximum torque to your revolute joint and controlling its speed to balance the system.
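The physics behind both suggestions can be sketched in a couple of lines. This is an idealized model, not Box2D API code: in the ideal case equal masses at mirrored lever arms produce zero net torque about the pivot, and angular damping (which Box2D applies roughly as shown) bleeds off whatever residual rotation the solver's small errors introduce.

```c
/* Net torque about the pivot: sum of m * g * (signed lever arm).
   Equal masses at mirrored distances cancel exactly here; in Box2D,
   float error and one-at-a-time contact resolution leave a small
   residual, which damping can absorb. Illustrative model only. */
double net_torque(const double mass[], const double lever_arm[], int n)
{
    const double g = 9.8;
    double torque = 0.0;
    for (int i = 0; i < n; i++)
        torque += mass[i] * g * lever_arm[i];  /* sign encodes the side */
    return torque;
}

/* Per-step angular damping, applied each timestep dt */
double damp(double angular_velocity, double damping, double dt)
{
    return angular_velocity / (1.0 + damping * dt);
}
```

A motorized joint effectively adds a controllable torque term to that sum, which is why a small-max-torque motor can hold the beam level against the residual.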

glGenTextures speed and memory concerns

I am learning OpenGL and recently discovered about glGenTextures.
Although several sites explain what it does, I wonder how it behaves in terms of speed and, particularly, memory.
Exactly what should I consider when calling glGenTextures? Should I consider unloading and reloading textures for better speed? How many textures should a standard game need? What workarounds are there to get around any limitations memory and speed may bring?
According to the manual, glGenTextures only allocates texture "names" (i.e. IDs) with no "dimensionality". So you are not actually allocating texture memory as such, and the overhead here is negligible compared to actual texture memory allocation.
glTexImage will actually control the amount of texture memory used per texture. Your application's best usage of texture memory will depend on many factors: including the maximum working set of textures used per frame, the available dedicated texture memory of the hardware, and the bandwidth of texture memory.
As for your question about a typical game: what sort of game are you creating? Console games are starting to fill Blu-ray disc capacity (I've worked on a PS3 title that was initially not projected to fit on Blu-ray), and a large portion of that space is textures. Downloadable web games, on the other hand, are much more constrained.
Essentially, you need to work with reasonable game design and come up with an estimate of:
1. The total textures used by your game.
2. The maximum textures used at any one time.
Then you need to look at your target hardware and decide how to make it all fit.
Here's a link to an old Game Developer article that should get you started:
http://number-none.com/blow/papers/implementing_a_texture_caching_system.pdf
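As a starting point for that estimate, per-texture memory is roughly width × height × bytes per texel, plus about a third more for a full mipmap chain (the geometric series 1 + 1/4 + 1/16 + … ≈ 4/3). This is a rough sketch; real drivers add padding and alignment overhead:

```c
#include <stddef.h>

/* Approximate GPU memory for one 2D texture. A full mipmap chain
   adds ~1/3 on top of the base level. Ignores driver padding,
   alignment, and compressed formats. */
size_t texture_bytes(size_t width, size_t height,
                     size_t bytes_per_texel, int mipmapped)
{
    size_t base = width * height * bytes_per_texel;
    return mipmapped ? base + base / 3 : base;
}
```

For example, a 1024×1024 RGBA8 texture is 4 MB base, about 5.3 MB with mipmaps; multiplying that by your maximum working set per frame gives the number to check against the hardware's texture memory.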
