The current versions of Lua don't support integer numbers, only floats. (With the upcoming 5.3 this is changing, but let's ignore this.)
So, my question is: what use is there in lua_pushinteger()? If the numbers get converted to floats anyway, why not use lua_pushnumber() directly?
(Please don't answer "for future compatibility with 5.3", which is a good answer for today but one that otherwise doesn't satisfy my curiosity: integer support wasn't expected in the old days. I want to know why lua_pushinteger() was introduced in the first place, not a justification in hindsight.)
The explicit handling of integers in the API was introduced for documentation, performance, and correctness.
Concentrating the handling of integers inside the API allows the core to use the best conversion to and from floats; on some platforms, this can be costly if done naively. It also allows the core to check for overflow, though Lua 5.1 and 5.2 did not check this.
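To make this concrete, here is a minimal C sketch against the Lua 5.1/5.2 C API. The calls are the real API functions; the surrounding program is just an illustration with error handling omitted:

    /* Both pushes end up storing a lua_Number (a double in the default
       configuration), but the caller states its intent and leaves the
       conversion to the core. */
    #include <lua.h>
    #include <lauxlib.h>

    int main(void) {
        lua_State *L = luaL_newstate();

        /* "This value is an integer" -- the core picks the best
           integer -> lua_Number conversion for the platform. */
        lua_pushinteger(L, 42);

        /* Versus doing the conversion yourself at every call site. */
        lua_pushnumber(L, (lua_Number)42);

        /* Symmetrically, the core handles lua_Number -> integer on the
           way out, instead of each caller casting by hand. */
        lua_Integer i = lua_tointeger(L, -2);
        (void)i;

        lua_close(L);
        return 0;
    }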
I have an iOS app that I'm trying to update to take advantage of the A7 64-bit processor.
In many parts of my code, I created integer variables of type int.
Should I change all of them to NSInteger ?
I read many articles on the subject, and I assume I should, but how is using extra memory for no reason a good thing? I mean, all my integer variables will never hold a number greater than 100...
Thanks (:
I need an int128 (and/or int256).
Is there a library or way in which I can use that in Delphi?
Note that I do not want to muck around with strings and such; support as close as possible to Int64 would be ideal.
There's BigInteger, but this calls a DLL to do its work, which is not acceptable.
I remember there being another library for big numbers, but I cannot remember the name...
OK, found it at: http://sourceforge.net/projects/bigint-dl/
BigInt is the Delphi library providing operations on extremely large integer numbers, known as multi-precision arithmetic. Our primary goal is to achieve maximum performance of calculations.
The source code is nicely documented in Chinese :-(
It uses mostly x86 32-bit assembly (no MMX etc., which is a pity).
This is an open source unit that I have used in the past for math with 'unlimited' sized integers:
http://www.bvbcode.com/code/b1uxniwl-1626766
Would that be what you were looking for?
PS: I am on my phone now. If this is helpful I will improve the formatting later.
I was reading this and I am curious about what was meant by increasing the memory footprint. I am not an expert in any of this, by any means. I actually know very little, other than what I've come up with thinking about how systems work. If someone could help clarify my thoughts and correct me where I'm wrong, I would really appreciate it.
I know that by using the proper typedefs I am future-proofing my code in case Apple changes the structure of the typedef, and using typedefs shouldn't affect the processor, since it's the compiler's or preprocessor's job to convert them. But will it actually use any more memory than is necessary, if the typedefs are only used for functions that expect them (and their precision), such as CGRect/CGSize/etc. and the NSDate functions that ask for those typedefs?
Basically, is there any EXTRA memory being used, given that they are only being used in situations where functions ask for them, rather than using their current counterparts (CGFloat -> float)?
This is for iOS vs. OS X, since I know that OS X has both 32-bit and 64-bit processors and the typedefs are expected there.
Think of it this way. Memory footprint often means how much memory you are consuming at any time. If you use 64-bit values instead of perfectly useful 32-bit ones for no reason, then there is some marginal inflation. That said, I'll bet most of your usage is in automatics and object ivars.
On iOS today, CGFloat == float.
I personally ALWAYS use CGFloat for anything that might interface with iOS - that is, unless I'm doing some math functions. And for exactly the reason you said. The other day I had to grab some code on iOS and move it to a Mac app, and it took almost no time (as I use CGFloat, NSInteger, and friends). You will get no conversion warnings (i.e. moving 64-bit values into 32-bit ones).
In the future, given the popularity of iOS, it's quite likely that there will be processors using 64-bit floating point and integers. It's the nature of progress. If you use CGFloat and friends, your code will compile without warnings in a universal app that does both 32-bit and 64-bit.
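For reference, this is roughly how those typedefs adapt per architecture. This is a simplified sketch; the real definitions live in CGBase.h and NSObjCRuntime.h, and the helper function is only for illustration:

    /* Simplified sketch of the architecture-dependent typedefs; don't
       compile this alongside the real SDK headers. */
    #if defined(__LP64__) && __LP64__
    typedef double CGFloat;    /* 64-bit: 8 bytes */
    typedef long   NSInteger;  /* 64-bit: 8 bytes */
    #else
    typedef float  CGFloat;    /* 32-bit: 4 bytes */
    typedef int    NSInteger;  /* 32-bit: 4 bytes */
    #endif

    /* Code written against the typedefs compiles without truncation
       warnings on both architectures, because the width follows the
       target, not your source. */
    static CGFloat halfOf(CGFloat width) {
        return width / 2;
    }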
If Apple uses CGFloat, why would you be concerned about it? Use the types that match the API calls you are making. If CGFloat were a memory problem, our phones would all be crashing.
The datatype char does not appear in the Data Architect for version 10. It now shows up as character.
I cannot find anything in the documentation for this type.
I have spent months developing a WCF Custom Adapter for the Advantage Database. Now I am getting data type exceptions because of the CHARACTER data type.
Does anyone know of any other undocumented modifications to the MetaData?
CHAR and Character are the same thing. I believe the difference you are seeing in ARC 10 vs earlier versions of ARC (I am assuming here, sorry) was a cosmetic bug fix. In ARC 9.1 for example, if you click the drop down it was listed as Character.
Using the sp_getColumns stored procedure, I see that the type returned is the same for both ADS 10.0 and ADS 9.1 (both are CHAR).
What sort of errors are you getting as far as data type exceptions?
I'm working with a team on a bigger application in Delphi 2007. It uses a bigger legacy framework to access the data. Both the app and the framework use String as the data type for strings. I have started to modify the code in the framework to support Delphi 2009 strings; see my previous questions about this.
I see 2 alternatives now:
Alt 1 - Continue to use string as before. This is probably the cleanest solution, as the framework will then support Unicode. But the code in the framework must be modified a lot to make this work. This requires in-depth understanding of the framework's internal algorithms. There is also a bigger chance of introducing new bugs.
Alt 2 - Replace String with AnsiString and Char with AnsiChar. This is probably a much easier solution, and also how I started to modify the code (but then I started thinking and asked this question...). The negative side is no support for Unicode. Unicode support is not a requirement, as it worked before, but it would be nice to have. It could also be useful in the future. Another problem is that the application must pass AnsiString variables as parameters to the framework's methods instead of String as before. There are thousands of calls to change...
So I don't know right now. Both options require a lot of work, but Alt 1 is probably more risky and time consuming. What I want from this forum is feedback and comments, as I guess I am not the first to have this problem.
EDIT
Another issue is the memory footprint. I wrote a quick test that allocates an array of one million strings. Each string was filled with 26 chars from A to Z.
With Delphi 2007 it took 40.011.600 bytes and the time was 4:15 minutes.
With Delphi 2009 it took 72.015.580 bytes and the time was 4:45 minutes.
The memory consumption was measured with GetHeapStatus.TotalAllocated.
I don't think we can afford to have the strings allocate twice as much memory.
It is not unusual to have 500 MB of memory consumption for each client now. I guess much of it is strings. Probably we will try to use AnsiString as much as possible.
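Doing the per-string arithmetic, the measured numbers roughly add up if I assume the usual string header sizes and an 8-byte allocation granularity (an assumption; I have not checked the exact layouts):

    Delphi 2007 AnsiString:     8-byte header + 26 x 1 byte + 1-byte terminator = 35 -> rounded up to ~40 bytes per string
    Delphi 2009 UnicodeString: 12-byte header + 26 x 2 bytes + 2-byte terminator = 66 -> rounded up to ~72 bytes per string

So most of the growth is simply the two bytes per character of UTF-16.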
Regards
Either stay with the old version of Delphi, or go all the way. You'll have to sooner or later anyway.
Note that the "replace everything with AnsiString" scheme is also not entirely foolproof, especially if you touch streams and your file formats need to stay the same. There are no explicit AnsiString versions of TStringList, TStringStream, etc. anymore.
The same probably goes for Datasnap, Indy and other frameworks.
You can try to use this trick for certain string-intensive parts at first, to avoid changing too much code directly. For example, I had my own XML library, which I patched to remain mostly AnsiString. The library was only used on the side, and Unicode was of no importance to it.
Start with "alt 2", then gradually add unicode support to your framework, then move over to Unicode.
Rationale: you want a stable app; switching over to Delphi 2009+ will eventually require you to really support Unicode.
Edit: 20100125
While doing "alt 2", watch the Delphi compiler hints and warnings.
The situation that Andreas describes will generate such hints and warnings.
I have explained this in my CodeRage 4 session about Unicode and other encodings.
The above link points to a page where you can view the replay of that session.
If you still have questions, just drop them here.
--jeroen
We evaluated the 2007 -> 2009 transition a year ago and tried it on a smaller project (200k lines). The result was that everywhere you do not use "fancy" things like pointers, set of char, etc., the porting is really not that difficult. In particular, the GUI units were ported within a day or so. This is equivalent to alt 1.
The library units with low-level routines, access to measurement systems, and so on were a whole different story. Here we chose to translate string -> AnsiString, char -> AnsiChar, etc. Porting these units is a pain to get correct, and the customer won't pay for the transition. Hence alt 2 for those units.
This mixed method gave us the best of both worlds, but we will keep some larger projects on Delphi 2007 and will probably only port them when a 64-bit version of the compiler comes out.
It'll be more work, but I'd really recommend that you upgrade to Unicode strings, because that's the native string type of the VCL and so all your controls will be dealing with Unicode strings anyway. Trying to convert everything back and forth will cause you all sorts of hassles.