Updating app to support 64-bit - iOS

I have an iOS app that I'm trying to update to take advantage of the A7 64-bit processor.
In many parts of my code, I created integer variables of type int.
Should I change all of them to NSInteger?
I have read many articles on the subject, and I gather that I should, but how is using extra memory for no reason a good thing? I mean, none of my integer variables will ever hold a number greater than 100...
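(For context, my understanding is that NSInteger is just a conditional typedef that widens under the 64-bit runtime, roughly like this; paraphrasing NSObjCRuntime.h, whose exact condition includes a couple more platform checks:)

    /* Paraphrased from NSObjCRuntime.h: NSInteger is a plain typedef,
       so the "extra memory" is 4 bytes per variable on 64-bit, and
       nothing at all on 32-bit. */
    #if __LP64__
    typedef long NSInteger;            /* 8 bytes on arm64 */
    typedef unsigned long NSUInteger;
    #else
    typedef int NSInteger;             /* 4 bytes on armv7 */
    typedef unsigned int NSUInteger;
    #endif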
Thanks (:

Related

Is unaligned memory access allowed on iOS devices?

I'm currently working on an app that loads a blob of tightly packed data containing different integer types (sized from char to int) that might not be properly aligned.
So, can I use a simple *(short*)ptr or similar access on that data? A test on my iPhone 5 shows no problem with that, but I'm not sure about all cases on all newer processors.
I did find some related information, like this:
ARMv6 and later, except some microcontroller versions, support unaligned accesses for half-word and single-word load/store instructions with some limitations, such as no guaranteed atomicity.
but in the case of "word" it seems that on 32-bit and 64-bit ARM a word is 32 and 64 bits respectively, which would mean a short requires proper alignment on a 64-bit machine.
So, can I assume this is safe, or should I use some keywords like __packed?
Or should I rather avoid it completely and restructure my data so it always has proper alignment (or always use memmove when the data comes from an external source and cannot be permanently modified)?
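(For what it's worth, the portable alternative I know of, instead of the raw cast, is to memcpy into an aligned local; something like this, if I understand correctly:)

    #include <stdint.h>
    #include <string.h>

    /* Reading a possibly misaligned 16-bit value: memcpy into an
       aligned local is well-defined C, and compilers typically
       collapse it to a single load on targets that tolerate
       unaligned access. */
    static inline uint16_t read_u16(const void *p)
    {
        uint16_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }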
It was ages ago that I tried it. It worked, but every single access to unaligned memory caused a trap, which took considerable time. I'd suggest you measure how long it takes to add a million aligned shorts vs a million unaligned shorts. If you only have a few hundred or a few thousand unaligned numbers, there's nothing to worry about.
__packed works reasonably fast. ARM has some clever instructions that handle unaligned access in very few operations. Again, I'd measure how long that takes. My experience with this is not current.
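Something along these lines would do as a measurement; this is a rough sketch, not tuned, and clock() granularity is coarse:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define N 1000000
    static unsigned char buf[2 * N + 2];

    /* Sum N shorts starting at the given byte offset; offset 0 is
       aligned, offset 1 is deliberately misaligned. */
    static long long sum_at(size_t off)
    {
        long long s = 0;
        for (size_t i = 0; i < N; i++) {
            int16_t v;
            memcpy(&v, buf + off + 2 * i, sizeof v);
            s += v;
        }
        return s;
    }

    int main(void)
    {
        clock_t t0 = clock();
        long long a = sum_at(0);
        clock_t t1 = clock();
        long long u = sum_at(1);
        clock_t t2 = clock();
        printf("aligned sum %lld in %ld ticks, unaligned sum %lld in %ld ticks\n",
               a, (long)(t1 - t0), u, (long)(t2 - t1));
        return 0;
    }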

Why should we use lua_pushinteger()?

The current versions of Lua don't support integer numbers, only floats. (With the upcoming 5.3 this is changing, but let's ignore this.)
So, my question is: what use is there in lua_pushinteger()? If the numbers get cast into a float anyway, why not use lua_pushnumber() directly?
(Please don't answer "for future compatibility with 5.3", which is a good answer today but one that doesn't otherwise satisfy my curiosity: integer support wasn't expected in the old days. I want to know why lua_pushinteger() was introduced in the first place, not an apologia in hindsight.)
The explicit handling of integers in the API was introduced for documentation, performance, and correctness.
Concentrating the handling of integers inside the API allows the core to use the best conversion to and from floats; on some platforms, this can be costly if done naively. It also allows the core to check for overflow, though Lua 5.1 and 5.2 did not perform that check.
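By way of illustration, a minimal sketch of both calls against the 5.1/5.2 API (names as in lua.h; error handling omitted):

    #include <lua.h>
    #include <lauxlib.h>

    /* Both calls leave a Lua number on the stack in 5.1/5.2, but
       lua_pushinteger takes a lua_Integer and lets the core do the
       int-to-float conversion (and, in principle, range checking)
       in one documented place. */
    int main(void)
    {
        lua_State *L = luaL_newstate();

        lua_pushinteger(L, 42);    /* from lua_Integer */
        lua_pushnumber(L, 42.0);   /* from lua_Number (a double) */

        lua_close(L);
        return 0;
    }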

Is there an int128?

I need an int128 (and/or int256).
Is there a library or way in which I can use that in Delphi?
Note that I do not want to muck around with strings and such; support as close as possible to Int64 would be ideal.
There's BigInteger, but that calls a DLL to do its work, which is not acceptable.
I remember there being another library for big numbers, but I cannot remember the name...
OK, found it at: http://sourceforge.net/projects/bigint-dl/
BigInt is the Delphi library providing operations with extremely large integer numbers, known as multi-precision arithmetic. Our primary goal is to achieve maximum performance of calculations.
The source code is nicely documented in Chinese :-(
It mostly uses 32-bit x86 assembly (no MMX etc., which is a pity).
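For the curious, the core technique these libraries implement is multi-limb arithmetic with explicit carry propagation. A toy sketch in C, purely illustrative and not the library's actual code:

    #include <stdint.h>
    #include <stdio.h>

    /* A toy unsigned 128-bit type built from two 64-bit limbs.
       Real libraries generalize to N limbs and add multiplication,
       division, signed semantics, and so on. */
    typedef struct { uint64_t lo, hi; } u128;

    static u128 u128_add(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry if low limb wrapped */
        return r;
    }

    int main(void)
    {
        u128 a = { UINT64_MAX, 0 };          /* 2^64 - 1 */
        u128 b = { 1, 0 };
        u128 s = u128_add(a, b);             /* expect hi = 1, lo = 0 */
        printf("hi=%llu lo=%llu\n",
               (unsigned long long)s.hi, (unsigned long long)s.lo);
        return 0;
    }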
This is an open source unit that I have used in the past for math with 'unlimited' sized integers: 
http://www.bvbcode.com/code/b1uxniwl-1626766
Would that be what you were looking for?
PS: I am on my phone now. If this is helpful I will improve the formatting later.

Using CGFloat and "memory footprint"

I was reading this and I am curious about what was meant by increasing the memory footprint. I am not an expert in any of this by any means; I actually know very little, other than what I've come up with from thinking about how systems work. If someone could help clarify my thoughts and correct me where I'm wrong, I would really appreciate it.
I know that by using the proper typedefs I am future-proofing my code in case Apple changes the structure of the typedef, and using typedefs shouldn't affect the processor, since it's the compiler's or preprocessor's job to convert them. But will it actually use any more memory than necessary, if the typedefs are only used for functions that expect them (and their precision), such as CGRect/CGSize/etc. and NSDate functions that ask for those typedefs?
Basically, is there any EXTRA memory being used, given that they are only used in situations where functions ask for them, rather than using their plain counterparts (CGFloat -> float)?
This is about iOS vs OS X, since I know that OS X has both 32-bit and 64-bit processors and the typedefs are expected there.
Think of it this way: memory footprint usually means how much memory you are consuming at any given time. If you use 64-bit values without any reason instead of perfectly adequate 32-bit ones, then there is some marginal inflation. That said, I'll bet most of your usage is in automatics and object ivars.
On 32-bit iOS, CGFloat == float; under the 64-bit runtime it is typedef'd to double.
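For reference, paraphrasing CGBase.h (the real header's condition is slightly longer):

    /* Paraphrased from CGBase.h: */
    #if defined(__LP64__) && __LP64__
    typedef double CGFloat;        /* 8 bytes on arm64 */
    #define CGFLOAT_IS_DOUBLE 1
    #else
    typedef float CGFloat;         /* 4 bytes on armv7 */
    #define CGFLOAT_IS_DOUBLE 0
    #endif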
I personally ALWAYS use CGFloat for anything that might interface with iOS, that is, unless I'm doing some math functions. And for exactly the reason you said. The other day I had to grab some code on iOS and move it to a Mac app, and it took almost no time (as I use CGFloat, NSInteger, and friends). You will get no conversion warnings (i.e. moving 64-bit values into 32-bit ones).
In the future, given the popularity of iOS, it's quite likely that there will be processors using 64-bit floating point and integers. It's the nature of progress. If you use CGFloat and friends, your code will compile without warnings in a universal app that does both 32 and 64 bit.
If Apple uses CGFloat, why would you be concerned about it? Use the types that match the API calls you are making. If CGFloat were a memory problem, our phones would all be crashing.
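For example, a minimal sketch of matching the API's types end to end (assumes an Apple SDK; makeSquare is just an illustrative name):

    #include <CoreGraphics/CoreGraphics.h>

    /* Because side is a CGFloat, this compiles without truncation
       warnings whether CGFloat is float (32-bit) or double (64-bit). */
    static CGRect makeSquare(CGFloat side)
    {
        return CGRectMake(0.0, 0.0, side, side);
    }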

I am having issues with data types not appearing in the Data Architect and would like to know what could be causing this

The datatype char does not appear in the Data Architect for version 10. It now shows up as character.
I cannot find anything in the documentation for this type.
I have spent months developing a WCF Custom Adapter for the Advantage Database. Now I am getting data type exceptions because of the CHARACTER data type.
Does anyone know of any other undocumented modifications to the metadata?
CHAR and CHARACTER are the same thing. I believe the difference you are seeing between ARC 10 and earlier versions of ARC (I am assuming here, sorry) was a cosmetic bug fix. In ARC 9.1, for example, if you clicked the drop-down it was listed as Character.
Using the sp_getColumns stored procedure, I see that the type is returned the same for both ADS 10.0 and ADS 9.1 (both are CHAR).
What sort of errors are you getting as far as data type exceptions?
