I was asked this in an interview, and my feeling at first was that each 64-bit number would be stored in two 32-bit memory locations, and then the system could do normal addition arithmetic based on its endianness rules. But I did not get confirmation on whether my answer was right, and I have been having doubts. I know this might be very basic, but I need to know. Thanks
If you don't have the overflow/carry bit, then for the low halves you have to add the lowest 31 bits together first, and then check whether the result carries into the 2^31 place (*). Combining that carry with the two operands' own top bits sets bit 31 correctly and tells you whether there is a carry out. Then you add the highest 32 bits together, adding 1 if there was a carry out in (*). Overflow in the highest 32 bits is OK, and handles the two's complement arithmetic nicely.
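In C you'd normally just let unsigned arithmetic wrap, but the masking trick itself looks roughly like this (a minimal sketch; the function and parameter names are mine, and I'm assuming the 64-bit values arrive as unsigned 32-bit halves):

    #include <stdint.h>

    /* Add two 64-bit values given as 32-bit halves without relying on a
       carry flag: add the low 31 bits, then recover the carry from bit 31. */
    static void add64_no_flag(uint32_t ahi, uint32_t alo,
                              uint32_t bhi, uint32_t blo,
                              uint32_t *rhi, uint32_t *rlo)
    {
        /* Bit 31 of 'partial' is the carry into the 2^31 place. */
        uint32_t partial = (alo & 0x7FFFFFFFu) + (blo & 0x7FFFFFFFu);
        /* Fold in the operands' own top bits; 'top' is 0..3. */
        uint32_t top = (alo >> 31) + (blo >> 31) + (partial >> 31);
        *rlo = (partial & 0x7FFFFFFFu) | ((top & 1u) << 31); /* set bit 31 */
        *rhi = ahi + bhi + (top >> 1);                       /* carry out */
    }

    int main(void)
    {
        uint32_t hi, lo;
        add64_no_flag(0u, 0xFFFFFFFFu, 0u, 1u, &hi, &lo);
        return (hi == 1u && lo == 0u) ? 0 : 1; /* expect 0x1_00000000 */
    }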
My answer would be:
Adding 2 numbers can result in at most 2 times the maximum number, so adding 2 32-bit numbers can result in at most a 33-bit number. Using the overflow/carry flag of the CPU, this bit 33 can be captured: add the least significant 32 bits first, then add the most significant 32 bits, checking the carry flag to add 1 to the most significant 32-bit part.
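C gives no direct access to the CPU's carry flag, but an unsigned wraparound comparison is the usual stand-in; here is a sketch under that assumption (the names are mine):

    #include <stdint.h>

    /* Add two 64-bit values held as 32-bit halves: least significant
       halves first, then propagate the carry into the high halves. */
    static void add64_halves(uint32_t ahi, uint32_t alo,
                             uint32_t bhi, uint32_t blo,
                             uint32_t *rhi, uint32_t *rlo)
    {
        uint32_t lo = alo + blo;    /* unsigned add, may wrap around */
        uint32_t carry = lo < alo;  /* 1 if the add wrapped, else 0 */
        *rlo = lo;
        *rhi = ahi + bhi + carry;   /* "add 1 to the most significant part" */
    }

(On GCC and Clang you could also use __builtin_add_overflow to get the carry explicitly.)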
Does this answer your question?
Related
My current question is related to Max length for a dynamic array in Delphi?. That question was asked in 2009, when the 64-bit compiler was not available. I am preparing a migration to Delphi XE2 (or whatever version is available for purchase now) or to Lazarus, because I need 64-bit support.
I would like to know what changed (related to dynamic array max length) in Delphi 64bit. Can I create bigger arrays now?
Dynamic array lengths are, in modern Delphi, NativeInt.
This means that dynamic arrays are limited in theory to 32-bit lengths in 32-bit code, and 64-bit lengths in 64-bit code. Of course, practical considerations mean that the limits are somewhat lower. However, it is possible to allocate dynamic arrays with more than 2^32 elements in 64-bit code.
On the other hand, strings are subject to a 32-bit limit on their length on all architectures. As I understand it, the reasoning is that strings are simply not expected to hold such large amounts of text, and many of the text-handling library functions that strings rely on use 32-bit lengths, whereas arrays are used for more general-purpose computing, where a 32-bit limit would greatly reduce their utility under 64 bits.
I've been wondering this for a long time, since I've never had "formal" education in computer science (I'm in high school), so please excuse my ignorance on the subject.
On a platform that supports the three types of integers listed in the title, which one is better and why? (I know that every kind of int has a different length in memory, but I'm not sure what that means, how it affects performance, or, from a developer's viewpoint, which one has more advantages over the others.)
Thank you in advance for your help.
"Better" is a subjective term, but some integers are more performant on certain platforms.
For example, on a 32-bit computer (referenced by terms like 32-bit platform and Win32), the CPU is optimized to handle a 32-bit value at a time; the 32 refers to the number of bits that the CPU can consume or produce in a single cycle. (This is a really simplistic explanation, but it gets the general idea across.)
In a 64-bit computer (most recent AMD and Intel processors fall into this category), the CPU is optimized to handle 64-bit values at a time.
So, on a 32-bit platform, a 16-bit integer loaded into a 32-bit register needs its upper 16 bits zeroed out (or sign-extended) before the CPU can operate on it; a 32-bit integer is immediately usable without any alteration; and a 64-bit integer must be operated on in two or more CPU cycles (once for the low 32 bits, and then again for the high 32 bits).
Conversely, on a 64-bit platform, 16-bit integers need 48 bits extended, 32-bit integers need 32 bits extended, and 64-bit integers can be operated on immediately.
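In C this extension happens implicitly whenever a narrow value is widened; a small sketch, not tied to any particular CPU:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t narrow = 0xABCDu;
        uint32_t wide32 = narrow;  /* zero-extended: upper 16 bits become 0 */
        uint64_t wide64 = narrow;  /* zero-extended: upper 48 bits become 0 */
        int16_t  sneg   = -5;
        int64_t  wneg   = sneg;    /* sign-extended: upper bits copy the sign */
        printf("%08x %016llx %lld\n", (unsigned)wide32,
               (unsigned long long)wide64, (long long)wneg);
        return 0;
    }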
Each platform and CPU has a 'native' bit-ness (like 32 or 64), and this usually limits some of the other resources that can be accessed by that CPU (for example, the 3GB/4GB memory limitation of 32-bit processors). The 80386 family (and later x86 processors) made 32-bit the norm, but AMD and then Intel have since made 64-bit the norm.
To answer your first question, the choice between a 16-bit, a 32-bit, and a 64-bit integer depends on the context in which it is used, so you really can't say one is better than the others, per se. However, depending on the situation, using one over another is preferable. Consider this example: let's say you have a database with 10 million users and you want to store the year they were born. If you create a field in your database with a 64-bit integer, then you have used 80 megabytes of your storage, whereas a 16-bit field uses only 20 megabytes. You can use a 16-bit field here because any year people are born in is smaller than the largest 16-bit number; in other words, 1980, 1990, 1991 < 65535, assuming your field is unsigned. All in all, it depends on the context. I hope this helps.
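The storage arithmetic is easy to check in C (the only assumptions are the field sizes and the 10 million row count from the example):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long users = 10000000ULL; /* 10 million rows */
        printf("64-bit field: %llu MB\n",
               users * sizeof(int64_t) / 1000000);    /* 80 MB */
        printf("16-bit field: %llu MB\n",
               users * sizeof(int16_t) / 1000000);    /* 20 MB */
        return 0;
    }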
A simple answer is to use the smallest one you KNOW will be safe for the range of possible values it will contain.
If you know the possible values are constrained to be smaller than the maximum value of a 16-bit integer (e.g. a value corresponding to the day of the year - always <= 366), then use that. If you aren't sure (e.g. the record ID of a table in a database that can have any number of rows), then use Int32 or Int64 depending on your judgment.
Others can probably give you a better sense of the performance advantages depending on what programming language you are using, but the smaller types use less memory and hence are 'better' to use if you don't need the larger range.
Just for reference, a 16-bit integer means there are 2^16 possible values - generally represented as between 0 and 65,535. 32-bit values range from 0 to 2^32 - 1, or just over 4.29 billion values.
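Those limits are available directly in C via <stdint.h>, if you want to see them (a tiny sketch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        printf("16-bit max: %u\n",   (unsigned)UINT16_MAX);           /* 65535 */
        printf("32-bit max: %llu\n", (unsigned long long)UINT32_MAX); /* 4294967295 */
        printf("64-bit max: %llu\n", (unsigned long long)UINT64_MAX);
        return 0;
    }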
This question, "On 32-bit CPUs, is an 'integer' type more efficient than a 'short' type?", may add some more good information.
It depends on whether speed or storage should be optimized. If you are interested in speed and you are running SQL Server in 64-bit mode, then 64-bit keys are what you need. A 64-bit processor running in 64-bit mode is optimized to use 64-bit numbers and addresses; likewise, a processor running in 32-bit mode is optimized to use 32-bit numbers and addresses. For example, in 64-bit mode, all pushes and pops onto the stack are 8 bytes, and fetches from cache and memory are again optimized for 64-bit numbers and addresses.

The processor, running in 64-bit mode, may need more machine cycles to handle a 32-bit number, just as a processor running in 32-bit mode may need more machine cycles to handle a 16-bit number. The increases in processing time come for several reasons, but just think about the example of memory alignment: the 32-bit number may not be aligned on a 64-bit integral boundary, which means loading the number requires shifting and masking it after loading it into a register. At the very least, a 32-bit number may need to be masked before some operations, which can noticeably reduce the processor's effective speed when handling 32-bit or 16-bit integers in 64-bit mode.
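The alignment point is easy to observe from compiler padding; here is a C11 sketch (the printed numbers are typical for x86-64, not guaranteed by the standard):

    #include <stdalign.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct row {
        int32_t a;  /* the compiler usually inserts 4 bytes of padding here */
        int64_t b;  /* ...so that this field lands on an 8-byte boundary */
    };

    int main(void)
    {
        printf("alignof(int64_t)    = %zu\n", alignof(int64_t));        /* 8 */
        printf("offsetof(row, b)    = %zu\n", offsetof(struct row, b)); /* 8 */
        printf("sizeof(struct row)  = %zu\n", sizeof(struct row));      /* 16 */
        return 0;
    }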
To provide a simple explanation for novice programmers: a bit is either a 0 or a 1.
A 16-bit int is an integer represented by a string of 16 bits (16 0's and 1's).
A 32-bit int is an integer represented by a string of 32 bits (32 0's and 1's).
A 64-bit int is an integer represented by a string of 64 bits (64 0's and 1's).
Examples to drive those concepts home:
an example of a 16-bit integer would be 0000000000000110 which equals the int 6
an example of a 32-bit integer would be 00000000000000000100001000100110 which equals the int 16934.
an example of a 64-bit integer would be 0000100010000000010000100010011000000000000000000100001000100110 which equals the int 612562280298594854.
You can represent more integers with 64 bits than with 32 bits, and more with 32 bits than with 16. So the benefit of using fewer bits is that you save space on the machine, and the benefit of using more bits is that you can represent more integers.
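You can verify the example bit strings above in C; strtoull with base 2 parses a string of 0's and 1's (a quick sketch):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        printf("%llu\n", strtoull("0000000000000110", NULL, 2)); /* 6 */
        printf("%llu\n", strtoull(
            "00000000000000000100001000100110", NULL, 2));       /* 16934 */
        printf("%llu\n", strtoull(
            "0000100010000000010000100010011000000000"
            "000000000100001000100110", NULL, 2)); /* 612562280298594854 */
        return 0;
    }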
Saying right now: yes, this is homework. I'm not asking for an answer, but I would love any help on a general direction from which to look at this problem. I've been working on it for hours now and have not made any real progress.
Can a function with a well-defined inverse be implemented to map 32-bit integers to 64-bit integers? Do all functions from 32-bit to 64-bit integers have well-defined inverses?
Of course not.
Take the identity function, for example. Every 32-bit value has a counterpart in the 64-bit value space (just use 0 in the top 32 bits, with the value in the bottom 32 bits). However, any 64-bit value whose top 32 bits are not 0 has no corresponding value in the 32-bit value space.
The above is a layman's explanation, and is probably not rigorous enough as a homework solution (as intended). You'd do well to read up on the pigeonhole principle.
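A concrete illustration of that asymmetry in C (the function names are mine, purely for illustration): zero-extension is injective, so it has a well-defined inverse on its image, but not every 64-bit value has a preimage:

    #include <stdint.h>
    #include <stdio.h>

    /* An injective map from 32-bit to 64-bit: zero-extension. */
    static uint64_t embed(uint32_t x) { return (uint64_t)x; }

    /* The inverse is only defined when the top 32 bits are zero. */
    static int try_invert(uint64_t y, uint32_t *out)
    {
        if ((y >> 32) != 0)
            return 0;          /* no 32-bit preimage exists */
        *out = (uint32_t)y;
        return 1;
    }

    int main(void)
    {
        uint32_t back;
        printf("%d\n", try_invert(embed(42u), &back));     /* 1: invertible */
        printf("%d\n", try_invert(0x100000000ULL, &back)); /* 0: no preimage */
        return 0;
    }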
Somebody confirmed to me that every location/address in memory holds 8 bits.
Can I ask why? Is it related to the memory chip architecture, or is it because of the 32-bit CPU? Is this 8 bits also true under other operating systems such as FreeBSD, Mac, and Linux?
Is there any relation between the number of bits in each location and the number of address lines in memory?
Is there any other architecture that has a different number of bits per address?
As is often the case, Wikipedia has an answer:
Architectures that did not have eight-bit bytes include the CDC 6000 series scientific mainframes that divided their 60-bit floating-point words into 10 six-bit bytes. These bytes conveniently held character data from punched Hollerith cards, typically the upper-case alphabet and decimal digits. CDC also often referred to 12-bit quantities as bytes, each holding two 6-bit display code characters, due to the 12-bit I/O architecture of the machine. The PDP-10 used assembly instructions LDB and DPB to load and deposit bytes of any width from 1 to 36-bits—these operations survive today in Common Lisp. Bytes of six, seven, or nine bits were used on some computers, for example within the 36-bit word of the PDP-10. The UNIVAC 1100/2200 series computers (now Unisys) addressed in both 6-bit (Fieldata) and nine-bit (ASCII) modes within its 36-bit word. Telex machines used 5 bits to encode a character.
Factors behind the ubiquity of the eight bit byte include the popularity of the IBM System/360 architecture, introduced in the 1960s, and the 8-bit microprocessors, introduced in the 1970s. The term octet unambiguously specifies an eight-bit byte (such as in protocol definitions, for example).
It's a hardware-architecture thing, not an OS thing. All architectures that you're going to run into day to day use eight bits per byte. There may be modern exceptions, particularly at the extremes (mainframe, super, embedded), but I'm not aware of any.
A memory address is the location of a specific byte in memory.
A byte has 8 bits.
In a way it's totally arbitrary - but 8 bits is convenient.
It holds 256 values, so it's large enough to store all the upper- and lower-case letters, digits, and symbols, plus it's divisible by 2 and by 4, which is handy for a few things.
Some DSP architectures use 16-bit addressable units. On many PIC microcontrollers, code memory is accessed in multiples of the instruction size (12 or 14 bits). I think it's useful to have a data size which is a power of two, and which can efficiently store character data. Using 16 bits to hold characters would historically have been considered wasteful (IMHO, in many cases, it still is), and 4 bits is too small a chunk size to be useful.
I am preparing for a quiz in my computer science class, but I am not sure how to find the correct answers. The questions come in four varieties, such as the following:
Assume the following system:
Auxiliary memory containing 4 gigabytes,
Memory block equivalent to 4 kilobytes,
Word size equivalent to 4 bytes.

1. How many words are in a block, expressed as 2^_? (write the exponent)
2. What is the number of bits needed to represent the address of a word in the auxiliary memory of this system?
3. What is the number of bits needed to represent the address of a byte in a block of this system?
4. If a file contains 32 megabytes, how many blocks are contained in the file, expressed as 2^_?
Any ideas how to find the solutions? The teacher hasn't given us any examples with solutions, so I haven't been able to figure this out by working backwards or anything, and I haven't found any good resources online.
Any thoughts?
Questions like these basically boil down to working with exponents and knowing how the different pieces fit together. For example, from your sample questions, we would do:
How many words are in a block, expressed as 2^_? (write the exponent)
From your description we know that a word is 4 bytes (2^2 bytes) and that a block is 4 kilobytes (2^12 bytes). To find the number of words in one block we simply divide the size of a block by the size of a word (2^12 / 2^2) which tells us that there are 2^10 words per block.
What is the number of bits needed to represent the address of a word in the auxiliary memory of this system?
This type of question is essentially an extension of the previous one. First you need to find the number of words contained in the memory, and from that you can get the number of bits required to address a word in the memory. We are told that memory contains 4 gigabytes (2^32 bytes) and that a word is 4 bytes (2^2 bytes); therefore the number of words in memory is 2^32 / 2^2 = 2^30 words. From this we can deduce that 30 bits are required to address a word in memory, because each additional address bit doubles the number of addressable locations, and we need 2^30 of them.
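Since everything here is a power of two, the whole exercise reduces to subtracting exponents; a small C sketch checking the two results above (the variable names are mine):

    #include <stdio.h>

    int main(void)
    {
        unsigned mem_log2   = 32; /* 4 GB = 2^32 bytes */
        unsigned block_log2 = 12; /* 4 KB = 2^12 bytes */
        unsigned word_log2  = 2;  /* 4 B  = 2^2  bytes */

        /* Dividing powers of two subtracts the exponents. */
        printf("words per block   = 2^%u\n", block_log2 - word_log2); /* 2^10 */
        printf("word address bits = %u\n",   mem_log2 - word_log2);   /* 30 */
        return 0;
    }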
Since this is tagged as homework I will leave the remaining questions as exercises :)
Work backwards. This is actually pretty simple mathematics. (Ignore the word "auxiliary".)
1. How much is a kilobyte? How much is 4 kilobytes? Try putting some numbers into 2^x, say x == 4. How much is 2^4 words? 2^8?
2. If you have 4GB of memory, what is the highest address? How large a number can you express with 8 bits? 16 bits? Hint: 4GB is an even power of 2. Which?
3. This is really the same question as 2, but with different input parameters.
4. How many kilobytes is a megabyte? Express 32 megabytes in kilobytes. Division will be useful.