I was reading this and I am curious about what was meant by increasing the memory footprint. I am not an expert in any of this, by any means; I actually know very little beyond what I've pieced together from thinking about how systems work. If someone could help clarify my thoughts and correct me where I'm wrong, I would really appreciate it.
I know that by using the proper typedefs I am future-proofing my code in case Apple changes the underlying type, and using typedefs shouldn't affect the processor, since it's the compiler's (or preprocessor's) job to resolve them. But will it actually use any more memory than necessary, if the typedefs are only used for functions that expect them (and their precision), such as the CGRect/CGSize functions and the NSDate methods that ask for those types?
Basically, is there any EXTRA memory being used, given that the typedefs are only used where functions ask for them, rather than using their underlying counterparts (CGFloat -> float)?
This is for iOS rather than OS X, since I know that OS X runs on both 32-bit and 64-bit processors and the typedefs are expected there.
Think of it this way: memory footprint usually means how much memory you are consuming at any given time. If you use 64-bit values without any reason where perfectly serviceable 32-bit ones would do, then there is some marginal inflation. That said, I'll bet most of your usage is in automatics and object ivars.
On iOS now, CGFloat == float.
I personally ALWAYS use CGFloat for anything that might interface with iOS - that is, unless I'm doing some math functions. And for exactly the reason you said. The other day I had to grab some code on iOS and move it to a Mac app, and it took almost no time (as I use CGFloat, NSInteger, and friends). You will get no conversion warnings (i.e. moving 64-bit values into 32-bit ones).
In the future, given the popularity of iOS, it's quite likely that there will be processors using 64-bit floating point and integers. It's the nature of progress. If you use CGFloat and friends, your code will compile without warnings in a universal app that does both 32-bit and 64-bit.
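To make the size question concrete, here's a quick sketch (CGFLOAT_IS_DOUBLE is an actual CoreGraphics macro; the rest is just illustration and assumes you're compiling against the iOS or OS X SDK):

#import <CoreGraphics/CoreGraphics.h>
#import <stdio.h>

int main(void) {
    // On 32-bit iOS, CGFloat is a typedef for float (4 bytes);
    // on 64-bit builds it is a typedef for double (8 bytes).
    printf("sizeof(CGFloat) = %zu, CGFLOAT_IS_DOUBLE = %d\n",
           sizeof(CGFloat), CGFLOAT_IS_DOUBLE);
    return 0;
}

So the only extra memory is in places where a double-sized CGFloat is used but a float would have sufficed, and only on 64-bit targets.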
If Apple uses CGFloat, why would you be concerned about it? Use the types that match the API calls you are making. If CGFloat were a memory problem, our phones would all be crashing.
How can I search for a specific static memory address in iOS games, for something like the amount of damage or other things that don't have a visible value in the game?
I use IDA, but it's not helpful because it doesn't show you the static memory addresses.
Assuming you're trying to hack existing games by twiddling bytes similar to what Game Genie did on old video game systems, good luck with that. Besides the fact that iOS apps use dynamic memory allocation for pretty much everything, iOS also incorporates address space layout randomization, which means even ostensibly static storage probably won't be in the same location in consecutive launches.
The only approach that would even stand a chance of working would involve injecting code that performs introspection of the classes using Objective-C runtime calls. And even if you do that, there's no guarantee that such hacks will be possible or practical on iOS.
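For what it's worth, a minimal sketch of that kind of introspection, assuming your injected code is already running inside the target process (the function name and the class you'd pass in are hypothetical):

#import <objc/runtime.h>
#import <stdio.h>
#import <stdlib.h>

// Dump the instance variable layout of a (hypothetical) game class.
void dumpIvars(const char *className) {
    Class cls = objc_getClass(className);
    if (!cls) return;
    unsigned int count = 0;
    Ivar *ivars = class_copyIvarList(cls, &count);
    for (unsigned int i = 0; i < count; i++) {
        printf("%s : %s (offset %td)\n",
               ivar_getName(ivars[i]),
               ivar_getTypeEncoding(ivars[i]),
               ivar_getOffset(ivars[i]));
    }
    free(ivars);
}

Even then, the offsets you find are relative to each object instance, not fixed addresses you could patch from outside.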
I'm currently working on an app that loads a blob of tightly packed data containing different integer types (sized from char to int) that might not be properly aligned.
So, can I use a simple *(short*)ptr or similar access to that data? A test on my iPhone 5 shows no problem with that, but I'm not sure about all cases on all newer processors.
I did find some related information, like this:
ARMv6 and later, except some microcontroller versions, support unaligned accesses for half-word and single-word load/store instructions with some limitations, such as no guaranteed atomicity.
but in the case of words it seems that on 32-bit and 64-bit ARMs a word is 32 and 64 bits respectively, which would mean a short requires proper alignment on a 64-bit machine.
So, can I assume this is safe, or should I use some keywords like __packed?
Or should I rather avoid it completely and recreate my data so it always has proper alignment (or always use memmove when the data comes from an external source and cannot be permanently modified)?
It's been ages since I tried it. It worked, but every single access to unaligned memory caused a trap, which took considerable time. I'd suggest you measure how long it takes to add a million aligned shorts vs. a million unaligned shorts. If you only have a few hundred or thousand unaligned numbers, there's nothing to worry about.
__packed works reasonably fast. ARM has some clever instructions that do unaligned access in very few operations. Again, I'd measure how long that takes. My experience with this is not current.
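If you want to stay safe regardless of the target, the usual portable trick is to copy the bytes into a properly aligned variable and let the compiler pick the best instructions. A minimal sketch (the function name is mine):

#include <stdint.h>
#include <string.h>

// Read a 16-bit value from a possibly unaligned position in the blob.
// A memcpy of a known small size compiles down to a plain load on
// targets that allow unaligned access, and to byte loads elsewhere.
static inline uint16_t read_u16(const void *ptr) {
    uint16_t value;
    memcpy(&value, ptr, sizeof value);
    return value;
}

This avoids both the undefined behaviour of the cast and any dependence on a particular ARM generation's unaligned-access support.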
I'm working on an iPhone app that involves certain physics calculations that are done thousands of times per second. I am working on optimizing the code to improve the framerate. One of the pieces that I am looking at improving is the inverse square root. Right now, I am using the Quake 3 fast inverse square root method. After doing some research, however, I heard that there is a faster way using the NEON instruction set. I am unfamiliar with inline assembly and cannot figure out how to use NEON. I tried implementing the math-neon library, but I get compiler errors because most of the NEON-based functions lack return statements.
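(For reference, this is roughly the method I mean; a sketch only, and the exact magic constant varies between sources:)

#include <stdint.h>
#include <string.h>

// Quake III-style approximate inverse square root with one Newton-Raphson step.
static inline float quake_rsqrt(float x) {
    float halfx = 0.5f * x;
    int32_t i;
    memcpy(&i, &x, sizeof i);            // reinterpret the float's bits
    i = 0x5f3759df - (i >> 1);           // magic-constant initial guess
    memcpy(&x, &i, sizeof x);
    return x * (1.5f - halfx * x * x);   // one refinement step
}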
EDIT: I've suddenly been getting some "unclear question" close votes. Although I think it's quite clear, and those who answered obviously understood, maybe some people need it stated explicitly:
How do you use NEON to perform faster calculations? And is it really the fastest method for getting the inverse square root on the iPhone?
EDIT: I did some more formal testing on NEON vs. Quake today, but if anything, I'm even more uncertain about the outcome now:
In-app testing (an app that is currently in the App Store with its invsqrt method modified):
Quake Method (leading by a marginal increase in average FPS under stressful conditions)
NEON (it was a really close call, but it seemed that Quake was slightly faster)
1/sqrtf() (a bit more noticeable difference, 1-3 FPS drop).
"Formal" Testing (An app that devours my Phone's CPU. Times how long it takes each method to get through an array of 10000000 randomly generated floats)
NEON (clearly the fastest, and double the speed if it is used to do two sqrts at once).
1/sqrtf() (only marginally slower than NEON. This surprising result leads me to deem this test "inconclusive" until I investigate further)
Quake (This method, surprisingly, was a few orders of magnitude slower than the other two methods. This is especially surprising given its performance in the other test.)
While Quake vs. NEON was too close to call in the app performance test, Quake vs. 1/sqrtf() was quite clear-cut in the first test, and the second test was extremely consistent in the values it output. What matters in the end, though, is app performance, so I'm going to make my final decision based on that test.
The accepted answer to the question you've linked already provides the answer, but doesn't spell it out:
#import <arm_neon.h>

float32x2_t foo(float someFloat) {
    // Put the scalar into both lanes of a 64-bit NEON register,
    // then take the reciprocal square root estimate of each lane.
    float32x2_t v = vdup_n_f32(someFloat);
    return vrsqrte_f32(v);
}
Header and function are already provided by the iOS SDK.
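Note that vrsqrte_f32 only gives a rough estimate (on the order of 8 bits of precision); if you need more, the usual pattern is to refine it with vrsqrts_f32. A sketch, assuming you process two floats per call (the helper name is mine):

#import <arm_neon.h>

// Reciprocal square root of two packed floats, refined by one
// Newton-Raphson step: est = est * (3 - v * est * est) / 2,
// where vrsqrts_f32(a, b) computes (3 - a * b) / 2.
static inline float32x2_t inv_sqrt2(float32x2_t v) {
    float32x2_t est = vrsqrte_f32(v);
    est = vmul_f32(est, vrsqrts_f32(vmul_f32(v, est), est));
    return est;
}

If your data lives in plain float arrays, loading and storing would go through vld1_f32 and vst1_f32.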
https://code.google.com/p/math-neon/source/browse/trunk/math_sqrtf.c <- there's a NEON implementation of invsqrt there; you should be able to copy the assembly bit as-is.
I'm working with a team on a large application in Delphi 2007. It uses a large legacy framework to access the data. Both the app and the framework use String as the data type for strings. I have started to modify the code in the framework to support Delphi 2009 strings; see my previous questions about this.
I see 2 alternatives now:
Alt 1 - Continue to use string as before. This is probably the cleanest solution, as the framework will then support Unicode. But the code in the framework must be modified a lot to make this work. This requires in-depth understanding of the framework's internal algorithms. There is also a bigger chance of introducing new bugs.
Alt 2 - Replace String with AnsiString and Char with AnsiChar. This is probably a much easier solution, and also how I started to modify the code (but then I started thinking and asked this question...). The negative side of this is no support for Unicode. Unicode support is not a requirement, as it worked before, but it is nice to have. It could also be useful in the future. Another problem is that the application must pass AnsiString variables as parameters to the framework's methods instead of String as before. There are thousands of calls to change...
So I don't know right now. Both options require a lot of work, but Alt 1 is probably more risky and time-consuming. What I want from this forum is feedback and comments, as I guess I am not the first to have this problem.
EDIT
Another issue is the memory footprint. I wrote a quick test that allocates an array of one million strings. Each string was filled with 26 chars from A to Z.
With Delphi 2007 it took 40,011,600 bytes and the time was 4:15 minutes.
With Delphi 2009 it took 72,015,580 bytes and the time was 4:45 minutes.
The memory consumption was measured with GetHeapStatus.TotalAllocated.
I don't think we can afford to have the strings allocate twice as much memory. (The doubling makes sense: the Delphi 2009 UnicodeString stores two bytes per character, so ASCII-only data roughly doubles in size.)
It is not unusual to have 500 MB of memory consumption per client now, and I guess much of that is strings. We will probably try to use AnsiString as much as possible.
Regards
Either stay with the old version of Delphi, or go all the way. You'll have to sooner or later anyway.
Note that the "replace everything with AnsiString" scheme is also not entirely foolproof, especially if you touch streams and your file formats need to stay the same. There are no explicit AnsiString versions of TStringList, TStringStream, etc. anymore.
The same probably goes for DataSnap, Indy, and other frameworks.
You can try to use this trick for certain string-intensive parts at first, to avoid changing too much code directly. E.g. I had my own XML library, which I patched to remain mostly AnsiString. The library was only used peripherally, and Unicode was of no importance to it.
Start with "alt 2", then gradually add unicode support to your framework, then move over to Unicode.
Rationale: you want a stable app; switching over to Delphi 2009+ will eventually require you to really support Unicode.
Edit: 20100125
While doing "alt 2" watch the Delphi compiler hints warnings.
The situation that Andreas describes will generate such hints and warnings.
I have explained this in my CodeRage 4 session about Unicode and other encodings.
The above link points to a page where you can view the replay of that session.
If you still have questions, just drop them here.
--jeroen
We evaluated the 2007 -> 2009 transition a year ago and tried a smaller project (200k lines). The result was that everywhere you do not use "fancy" things like pointers, set of char, etc., the porting is really not that difficult. The GUI units especially were ported within a day or so. This is equivalent to alt 1.
The library units with low-level routines, access to measurement systems, etc. were a whole different story. Here we chose to translate string -> AnsiString, char -> AnsiChar, and so on. Porting these units is a pain to get correct, and the customer won't pay for the transition. Hence alt 2 for those units.
This mixed method gave us the best of both worlds, but we will keep some larger projects on Delphi 2007 and will probably only port them when a 64-bit version of the compiler comes out.
It'll be more work, but I'd really recommend that you upgrade to Unicode strings, because that's the native string type of the VCL and so all your controls will be dealing with Unicode strings anyway. Trying to convert everything back and forth will cause you all sorts of hassles.
I am looking for some advice on memory usage on mobile devices, BlackBerry in particular. Using some profiling tools we have calculated a working set size in RAM of 525 KB. The problem is we don't really know whether this is acceptable or too high.
Can anyone give any insight into their own experience with memory usage on BlackBerry? What sort of number should we be aiming for?
I am also wondering what sort of things we should be looking out for in particular to reduce memory usage.
512 KB is perfectly acceptable on the current generation of BlackBerry devices. You can take a look at JBenchmark to see the exact JVM heap you can expect for each model, but none of the current devices out there go below 20 MB of heap. Most are much larger than that.
On JBenchmark you can choose the device you are interested in from a drop-down on the right side of the page. Then navigate to the JVM tab for the device.
When it comes to reducing memory usage, I wouldn't worry about the total bytes used by this application if you are truly in line with 525 KB; worry instead about how often allocation/reallocation is required. Try to pool/reuse objects as much as possible, avoiding any unneeded allocation. For instance, use the StringBuffer class to concatenate strings instead of the + operator: multiple String objects will be created for each concatenation using the operator, whereas a StringBuffer will just put the characters in an array and only expand it when needed. Google is a good way to find more tips.
Finally, relying on profiling tools, which the BlackBerry JDE has, is a very important part of understanding exactly how you can optimize heap memory usage.
If I'm not mistaken, BlackBerry apps are written in Java... which is a managed environment, which means really the only surefire way to use less memory is to create fewer objects. There's not a whole lot you can do about your working set, I think, since it's managed by the runtime (which is actually probably the point of using Java on devices like this).