How to produce these references using LaTeX (journal name: Journal of Mathematical Analysis and Applications)
[1] S. Anita, Analysis and Control of Age-Dependent Population Dynamics, Kluwer Academic Publishers, 2000.
[2] E. Barucci, F. Gozzi, Investment in a vintage capital model, Res. Econ. 52 (1998) 159–188.
[3] R. Boucekkine, N. Hritonenko, Y. Yatsenko (Eds.), Optimal Control of Age-Structured Populations in Economy, Demography, and the Environment, Routledge, 2013.
[4] M. Brokate, Pontryagin’s principle for control problems in age-dependent population dynamics, J. Math. Biol. 23 (1985) 75–101.
[5] A.V. Dmitruk, N.P. Osmolovskii, Necessary conditions for a weak minimum in optimal control problems with integral equations subject to state and mixed constraints, SIAM J. Control Optim. 52 (2014) 3437–3462.
[6] A.Ya. Dubovitskii, A.A. Milyutin, Necessary conditions for a weak extremum in general optimal control problem, Zh. Vychisl. Mat. Mat. Fiz. 8(4) (1968) 725–779 (in Russian).
[7] S. Faggian, Hamilton–Jacobi equations arising from boundary control problems with state constraints, SIAM J. Control Optim. 47(4) (2008) 2157–2178.
[8] G. Feichtinger, R.F. Hartl, P.M. Kort, V.M. Veliov, Anticipation effects of technological progress on capital accumulation: a vintage capital approach, J. Econom. Theory 126 (2006) 143–164.
[9] G. Feichtinger, R.F. Hartl, P.M. Kort, V.M. Veliov, Financially constrained capital investments: the effects of disembodied and embodied technological progress, J. Math. Econom. 44 (2008) 459–483.
[10] G. Feichtinger, G. Tragler, V. Veliov, Optimality conditions for age-structured control systems, J. Math. Anal. Appl. 288 (2003) 47–68.
[11] G. Gripenberg, S.O. Londen, O. Staffans, Volterra Integral and Functional Equations, Cambridge Univ. Press, 1990.
[12] M. Iannelli, Mathematical Theory of Age-Structured Population Dynamics, Giardini Editori, Pisa, 1995.
[13] L.V. Kantorovich, G.P. Akilov, Funkcionalny Analiz (Functional Analysis), Nauka, Moscow, 1984 (in Russian).
[14] M.I. Krastanov, N.K. Ribarska, Ts.Y. Tsachev, Pontryagin maximum principle for infinite-dimensional problems, SIAM J. Control Optim. 49(5) (2011) 2155–2182.
[15] M. Kuhn, S. Wrzaczek, A. Prskawetz, G. Feichtinger, Optimal choice of health and retirement in a life-cycle model, J. Econom. Theory 158 (2015) 186–212.
[16] A.A. Milyutin, A.V. Dmitruk, N.P. Osmolovskii, Maximum Principle in Optimal Control, Moscow State University, Faculty of Mechanics and Mathematics, Moscow, 2004 (in Russian).
[17] C. Saglam, V.M. Veliov, Role of endogenous vintage specific depreciation on the optimal behavior of firms, Int. J. Econ. Theory 4(3) (2008) 381–410.
[18] V.M. Veliov, Optimal control of heterogeneous systems: basic theory, J. Math. Anal. Appl. 346 (2008) 227–242.
[19] G.F. Webb, Theory of Nonlinear Age-Dependent Population Dynamics, Marcel Dekker, 1985.
[20] M.L. Weitzman, Income, Wealth, and the Maximum Principle, Harvard University Press, 2003.
[21] K. Yosida, E. Hewitt, Finitely additive measures, Trans. Amer. Math. Soc. 72 (1952) 46–66.
Your journal provides a class file here with a template. I strongly suggest using the template if you have not already.
The first thing you need to do is create a separate .bib file for your bibliography. Open a new document with your editor and save it as your-bib-database.bib
It will be called into the main document when you use \bibliography{<your-bib-database>} and un-comment the template code below it.
The references should look something like this-- I did a couple for you. You can reference more examples and the documentation here.
@book{anita-key,
  author    = "S. Anita",
  title     = "{Analysis and Control of Age-Dependent Population Dynamics}",
  publisher = "Kluwer Academic Publishers",
  year      = "2000",
}
@article{AVD-key,
  author  = "A.V. Dmitruk and N.P. Osmolovskii",
  title   = "{Necessary conditions for a weak minimum in optimal control problems with integral equations subject to state and mixed constraints}",
  journal = "SIAM J. Control Optim.",
  volume  = "52",
  pages   = "3437--3462",
  year    = "2014",
}
According to the template, you then cite each item from the .bib file in your document with \cite{key}; BibTeX generates the corresponding \bibitem entries for you.
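As a minimal sketch of the main document (assuming the Elsevier elsarticle class and its numeric bibliography style elsarticle-num; the review option, the title/author placeholders and the file name are illustrative, and the keys follow the examples above):

\documentclass[review]{elsarticle}

\begin{document}

\begin{frontmatter}
\title{Your title}
\author{Your name}
\begin{abstract}Your abstract.\end{abstract}
\end{frontmatter}

Age-structured population dynamics are treated in \cite{anita-key},
and the necessary optimality conditions come from \cite{AVD-key}.

\bibliographystyle{elsarticle-num}
\bibliography{your-bib-database}

\end{document}

Run latex, then bibtex, then latex twice more so the citation numbers resolve.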
I wonder why the results differ for these two cases.
Working Storage:
WS-SUM-LEN PIC S9(4) COMP.
WS-LEN-9000 PIC 9(5) VALUE 9000.
WS-TMP-LEN PIC 9(5).
WS-FIELD-A PIC X(2000).
Case 1) COMPUTE WS-SUM-LEN = WS-LEN-9000 + LENGTH OF WS-FIELD-A
Result: WS-SUM-LEN = 1000
Case 2)
MOVE LENGTH OF WS-FIELD-A TO WS-TMP-LEN
COMPUTE WS-SUM-LEN = WS-LEN-9000 + WS-TMP-LEN
Result: WS-SUM-LEN = 11000
The compiler option is TRUNC(OPT). Why does no truncation occur in case 2?
Binary fields in IBM's Enterprise COBOL
Warnings
Compiler option TRUNC determines how code for binary fields is generated.
Do not just up and change from your site's default setting of option TRUNC. The different settings for TRUNC can give different results.
Changing from TRUNC(BIN) to TRUNC(STD) will give different results for any values beyond the decimal-values represented by the PICture defining the field. For a signed field, the same applies for the negative value.
01 a-good-name BINARY PIC 99.
ADD 1 TO a-good-name
With TRUNC(STD) the result will be truncated once 99 is reached. With TRUNC(BIN) the result will be truncated once 65535 is reached (if the field were signed, the truncation would be at 99 as before for TRUNC(STD) and 32767 for TRUNC(BIN)).
Changing from TRUNC(BIN) to TRUNC(OPT), without program changes, is only possible if all, entirely all, usage of binary fields is limited to decimal-values represented by the PICture. Particular pieces of code may appear to "work", but it would be a massive coincidence if all use of binary fields gave the same result between the two compiler options, on your system.
It is similar for changing from TRUNC(STD) to TRUNC(OPT). Although less coincidence would be needed for things to appear to work, that only increases the false sense of security, leaving the potential for subtle differences to be missed for some time.
Changing from genuine use of TRUNC(OPT) to either TRUNC(STD) or TRUNC(BIN) is possible without effort. However, why would you want to?
However, if your use is not genuine (using TRUNC(OPT) with data that does not conform to PICture), then your original results are unreliable, and you will get differences when changing to TRUNC(STD) and will likely get differences when changing to TRUNC(BIN).
In short, changing the site-default for compiler option TRUNC is something to be considered very carefully, and must include provision for verification of results.
Sites do at times make such a change; the only ones I know of are from TRUNC(BIN) (mostly) or TRUNC(STD) to TRUNC(OPT), for performance reasons. These have been done as projects, not just by changing the option and blundering on from there.
Do not override the site-default for TRUNC within systems. If you have programs which are using the same binary data (from files, databases, inter-program communication, messages, or any other way) and they don't all treat the data in the same way, it is asking for trouble.
Some myths
Further explanation will be given later in the text.
There is a difference between TRUNC(BIN) and making all your binary fields COMP-5 (or COMPUTATIONAL-5).
There is no difference whatsoever. When TRUNC(BIN) is specified, the compiler simply treats all binary fields as COMP-5.
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
Although the term itself makes many experienced people think it will be faster ("it'll be like when I code it myself in Assembler") it is in fact slower, on the whole. The slowing-down increases as the field-size increases.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
It does. It truncates to field-size. Because it truncates to field-size, the intermediate fields must always be larger than the source fields, which means more instructions must be used, and different instructions.
Also, and important to know, the ON SIZE ERROR clause (which can be used with all arithmetic verbs) only ever uses the PICture clause to determine that a size-error has occurred. That is, if you have a COMP-5 PIC S9(4), which can contain a maximum positive value of 32,767, and do this:
MULTIPLY that-field BY 10 GIVING that-field
ON SIZE ERROR
DISPLAY "Busted"
END-MULTIPLY
Any value above 9999 will cause the DISPLAY to be processed.
Which really means "don't use ON SIZE ERROR with COMP-5 or TRUNC(BIN)".
TRUNC(OPT) generates optimal code.
In isolation, it does. However this does not preclude further optimisations from compiler option OPTIMIZE/OPT across a wider context.
When using binary fields, always use the maximum PICture for the size of the field
A binary field with 1-4 digits will occupy a half-word, two bytes of storage. With 5-9 digits, a word, or a fullword, of four bytes. With 10-18 digits, a double-word of eight bytes.
The aged recommendation is to always specify four digits, nine digits and 18 digits (well, no-one really goes above nine, do they...?).
This is advice I've received in the past, and given out myself. However, in Enterprise COBOL it is not good advice.
The best advice here is to define the number of digits needed. This will at times improve performance, will never degrade performance, and will make the program easier to understand by best describing the data.
When using binary fields, always make them signed.
More advice I've received and given in the past. Untrue with Enterprise COBOL. If a field can contain a negative value, make it signed. Otherwise make it unsigned.
At times, with interfaces, it is not explicit whether a field should be signed. However, it will be explicit from the maximum value expected. As will the field definition (the USAGE).
For instance, an SQL VARCHAR as a host-variable can have a maximum size of 32767 bytes. Since the actual length is held in a two-byte binary field, the field should be signed. Any value "above" 32767 will be misinterpreted by DB2/SQL.
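For illustration, a typical DB2 VARCHAR host-variable declaration looks like this (the names and the 200-byte size are invented; the level-49 length item is the signed two-byte binary field being described):

       01  WS-DESCRIPTION.
      *    Signed two-byte binary length, as DB2 expects
           49  WS-DESCRIPTION-LEN   PIC S9(4) COMP.
      *    The character data itself
           49  WS-DESCRIPTION-TEXT  PIC X(200).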
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
Since the compiler has to take care of decimal truncation, and since anything which could lead to truncation would require the "next size up", a binary field of nine digits can require a double-word intermediate field. So requires code to convert to a double-word, and convert the result back from a double-word to a word. If nine digits are required, it will generally be better to define 10 digits and save on the conversions.
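A hedged illustration of that advice (field names are invented; the comments restate the storage mapping and the nine-digit point from the text above):

      * 1-4 digits: halfword storage, halfword arithmetic where possible
       01  WS-SMALL-COUNT    BINARY PIC 9(4).
      * 5-8 digits: fullword storage, fullword arithmetic where possible
       01  WS-MEDIUM-COUNT   BINARY PIC 9(8).
      * 9 digits: still a fullword, but can force doubleword intermediates
       01  WS-AWKWARD-COUNT  BINARY PIC 9(9).
      * 10-18 digits: doubleword; often cheaper than nine digits in practice
       01  WS-LARGE-COUNT    BINARY PIC S9(10).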
Note
The above is all known to hold true for Enterprise COBOL up to V4.2.
IBM has entirely rewritten the code-generation and optimisation (now at two possible levels of optimisation) for Enterprise COBOL V5. There is considerable improvement in the treatment of binary fields, including, for instance, only doing the truncation of values once it is known that truncation is necessary. I am not aware that the use of V5 changes anything here other than the scale of performance differences. All general usage of binary fields should be faster with V5 than with earlier versions of Enterprise COBOL.
Binary fields
COBOL, for binary fields, uses decimal maxima determined by the PICture size.
Such a field with PIC 9 can contain a maximum value of 9 before truncation. If signed, the range of values is -9 to +9. Values outside that range will be truncated.
For PIC 99 it is 99, and if signed -99 to +99.
For PIC 999 it is 999, and if signed -999 to +999.
You get the pictu... idea.
It is down to the compiler-implementation as to how those values are stored.
Indeed, the COBOL Standard only gained support for binary fields (USAGE BINARY) relatively recently (1985). Before that, which "non-display" fields were supported, and how, was down to USAGE COMPUTATIONAL, whose specifics were compiler-dependent.
Generally across compilers COMP, COMP-1 and COMP-2 (binary with decimal maxima, short floating-point and long floating-point, respectively) are standard, though not part of the Standard. Beyond COMP-2, what the field definitions mean can vary amongst compilers.
So, first recommendation, suggest that your local site standards use BINARY instead of COMP for new code (and PACKED-DECIMAL instead of COMP-3, for packed-decimal fields). BINARY (and COMP-4) within Enterprise COBOL is simply an alias of COMP, so there is absolutely no problem in doing this.
There is another type of binary field, which is the native-binary field. In Enterprise COBOL this is USAGE COMP-5.
COMP-5 has its field-size determined by the PICture definition, but its maxima are that of the full bit-pattern possible for the field size. A PIC S9(4) COMP-5 can contain -32768 to 32767.
Note at this point that a native-binary field, and this may seem counter-intuitive, generally needs more generated machine-code to support its use. This is because it truncates to field-size, rather than PICture.
Note also that there is one place where this does not happen, which is ON SIZE ERROR, which will be true if the value exceeds the PICture size. Which means, to my mind, don't use ON SIZE ERROR with COMP-5 (or TRUNC(BIN), see soon) fields.
Compiler option TRUNC
The compiler option TRUNC defines how machine-code is generated for binary fields. There are three options:
TRUNC(BIN)
Truncation to field-size.
This treats all the non-native-binary fields in the program (COMP/COMP-4/BINARY) as native-binary (as though they had been defined as COMP-5).
This allows the full range of bit patterns to be used, but has impacts on performance.
TRUNC(STD)
Truncation to PICture size.
Generates machine-code for the COBOL Standard truncation to PICture size. PIC 9(n) can contain no more than n significant digits; they will be truncated whenever the field is a "target" (whenever its value changes).
TRUNC(OPT)
Truncation of any type only used if it happens to be convenient.
I describe this as being a contract between the coder and the compiler. The coder contracts to never (as in never) allow a value to exceed the PICture size. The compiler contracts to always get it right in such a case.
If the coder breaks the contract the coder is entirely to blame for the ensuing rubbish.
When to use each setting of TRUNC (further recommendation)
BIN Never. Use COMP-5 for individual fields where they require access to all bits (pay attention to SQL and CICS "system" fields, external data from non-Mainframe sources, inter-language communication between COBOL and Java/C/C++, and anywhere else where the data-maxima for a field are beyond the PICture and it is not possible to make the field bigger, because the actual logical definition of the field size is outside your program).
STD use this unless all, as in all, your data always, as in always, conforms to PICture.
OPT use this only, as in only, if all, as in all, your data always, as in always, conforms to PICture.
If you have COMP PIC 99, for instance, you must not, when using OPT, allow it to hold a value of 99 and then add one to it. Or anything similar.
The Answer
You used TRUNC(OPT), entering into the contract. You immediately broke it: WS-SUM-LEN is PIC S9(4), so its PICture allows at most 9999, yet 9000 + 2000 is 11000. It is your fault.
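One way to keep the contract in the example from the question is simply to give the receiving field enough digits for the largest result it can actually receive. A minimal sketch, reusing the question's names:

       01  WS-SUM-LEN   PIC S9(5) COMP.
       01  WS-LEN-9000  PIC 9(5)  VALUE 9000.
       01  WS-FIELD-A   PIC X(2000).
      * 9000 + 2000 = 11000 now conforms to the PICture of WS-SUM-LEN,
      * so both COMPUTE variants give 11000 under TRUNC(OPT).
           COMPUTE WS-SUM-LEN = WS-LEN-9000 + LENGTH OF WS-FIELD-A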
Warning
If your site is using TRUNC(OPT) and not everyone is fully aware of the implications, you will, as in will, have problems.
Substantiation of the Myths from above
There is a difference between TRUNC(BIN) and making all your binary fields COMP-5 (or COMPUTATIONAL-5).
Define two fields in small program. They should be defined as COMP/COMP-4/BINARY (or COMPUTATIONAL/COMPUTATIONAL-4 if that is your bent).
In the program, add a literal to each of the fields (do this with two separate statements, to make it easier to follow, unless you are experienced with reading the generated code in a listing).
Compile the program with compiler options LIST,NOOFFSET (this will produce, in the compiler listing, output showing the generated machine-code in a so-called "pseudo-assembler" format) and TRUNC(BIN).
Copy the program. In the copy, change the USAGE of the two fields to COMP-5 (or COMPUTATIONAL-5).
Compile this program, again with LIST,NOOFFSET but this time the value for TRUNC is irrelevant as it does not affect COMP-5 fields.
Compare the output listings. If there is one byte difference, eat someone's hat.
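A minimal sketch of such a test pair (the program and field names are mine):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. TRNCTEST.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * Version 1: compile as-is with LIST,NOOFFSET,TRUNC(BIN)
      * Version 2: change BINARY to COMP-5, recompile; TRUNC is then irrelevant
       01  WS-HALFWORD   BINARY PIC S9(4) VALUE 0.
       01  WS-FULLWORD   BINARY PIC S9(8) VALUE 0.
       PROCEDURE DIVISION.
           ADD 1 TO WS-HALFWORD
           ADD 2 TO WS-FULLWORD
           GOBACK.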
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
From this discussion at IBM's COBOL Cafe: https://www.ibm.com/developerworks/community/forums/html/topic?id=ae9ef6bc-6e4e-43f8-a814-e66bea25fb8c&ps=25
Here's a multiply of a PIC 9(3) by a PIC 9(5).
With TRUNC(STD)
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
With TRUNC(BIN)
000019 MULTIPLY
00023C 4820 8030 LH 2,48(0,8) PICS9-4
000240 5840 8038 L 4,56(0,8) PICS9-8
000244 8E40 0020 SRDA 4,32(0)
000248 5D40 C000 D 4,0(0,12) SYSLIT AT +0
00024C 4E50 D120 CVD 5,288(0,13) TS2=16
000250 F154 D110 D123 MVO 272(6,13),291(5,13) TS2=0
000256 4E40 D120 CVD 4,288(0,13) TS2=16
00025A 9110 D115 TM 277(13),X'10' TS2=5
00025E D204 D115 D123 MVC 277(5,13),291(13) TS2=5
000264 4780 B05C BC 8,92(0,11) GN=10(00026C)
000268 9601 D119 OI 281(13),X'01' TS2=9
00026C GN=10 EQU *
00026C 4E20 D120 CVD 2,288(0,13) TS2=16
000270 FC82 D111 D125 MP 273(9,13),293(3,13) TS2=1
000276 D202 D128 C008 MVC 296(3,13),8(12) TS2=24
00027C D204 D12B D115 MVC 299(5,13),277(13) TS2=27
000282 4F20 D128 CVB 2,296(0,13) TS2=24
000286 F144 D12B D110 MVO 299(5,13),272(5,13) TS2=27
00028C 4F50 D128 CVB 5,296(0,13) TS2=24
000290 5C40 C000 M 4,0(0,12) SYSLIT AT +0
000294 1E52 ALR 5,2
000296 47C0 B08E BC 12,142(0,11) GN=11(00029E)
00029A 5A40 C004 A 4,4(0,12) SYSLIT AT +4
00029E GN=11 EQU *
00029E 1222 LTR 2,2
0002A0 47B0 B098 BC 11,152(0,11) GN=12(0002A8)
0002A4 5B40 C004 S 4,4(0,12) SYSLIT AT +4
0002A8 GN=12 EQU *
0002A8 5050 8040 ST 5,64(0,8)
It doesn't take any knowledge of IBM Assembler to work out which of those two pieces of code is going to run more quickly.
The difference in the line-numbers (19 Vs 23) is just down to the fact that TRUNC(BIN) makes the PICture size irrelevant, so where I had three calculations doing the same thing with different size fields, for TRUNC(BIN) the code for each was the same, because the size of each field is the same, a word/fullword of four bytes.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
See the code immediately above. It is so massive due to the need to provide truncation. The need to provide decimal truncation is down to the COBOL Standard, it's what must happen in the language.
TRUNC(OPT) generates optimal code.
The code generated will always be the most efficient for that code-sequence. The same code-sequence will always generate the same code, before optimisation.
However, the optimizer is capable of spotting that a particular undisturbed state is available for a source-field earlier in the program, and replace part or all of the TRUNC(OPT) code with code relying on the previously-available value.
When using binary fields, always use the maximum PICture for the size of the field
From the same IBM COBOL Cafe discussion referenced above, with these definitions:
01 PIC9-3 BINARY PIC 999.
01 PIC9-5 BINARY PIC 9(5).
01 THE-RESULT8 BINARY PIC 9(8).
01 PIC9-4 BINARY PIC 9(4).
01 PIC9-8 BINARY PIC 9(8).
01 THE-RESULT BINARY PIC 9(8).
And these calculations:
MULTIPLY PIC9-4 BY PIC9-8
GIVING THE-RESULT
MULTIPLY PIC9-3 BY PIC9-5
GIVING THE-RESULT8
Here's the generated code for TRUNC(STD):
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8) THE-RESULT
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
The first block of pseudo-assembler is with the number of digits in the PICture being the maximum that gives the same field-size. A BINARY PIC 9(3) occupies a half-word, and 9(4) is the largest that can appear in a half-word. A PIC 9(5) occupies a word/fullword, and, given the nine-digit myth above, eight digits is used for that (to be fair to this particular myth).
The second block is with the number of digits which represent the data accurately, and which don't happen to require truncation when a multiplication is carried out.
Using the "full-size" PICtures guarantees that unnecessary truncation will always occur.
The difference in the number of instructions is small, and LH is faster than L, so plus to the full-size on that. But M is much slower than L, and MH is slower than L but faster than M. So plus to the optimal size on that. And the D (a divide, which is slow, slow) is not required at all in the second block (because no truncation is required). So bad-boy to the full-size fields on that.
The code for TRUNC(OPT) is also faster for the optimal-size fields, although the difference between the two is not as great (because TRUNC(OPT) in this code-sequence decides it does not need the truncation to base-10 and would not in a million years consider the truncation to field-size).
When using binary fields, always make them signed.
Again from the same IBM COBOL Cafe discussion, here are same-length signed fields vs unsigned fields, with TRUNC(STD):
000019 MULTIPLY
000238 4830 8030 LH 3,48(0,8) PICS9-4
00023C 5C20 8038 M 2,56(0,8) PICS9-8
000240 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000244 5020 8040 ST 2,64(0,8) THE-RESULTS
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8)
The generated code differs from the above when compiled with TRUNC(OPT) or with TRUNC(BIN), but under each of those options the signed and unsigned code-sequences are again identical to each other.
The presence or absence of a sign makes no difference to the code generated.
Except in one case, where the nine-digit myth comes into play. With a nine-digit binary field, a signed definition does generate less code than an unsigned one, but that is still more code than if eight digits were used.
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
From the IBM Enterprise COBOL Version 4 Release 2 Performance Tuning paper, pp. 32-33:
The following shows the general performance considerations (from most efficient to least efficient) for the number of digits of precision for signed binary data items (using PICTURE S9(n) COMP) using TRUNC(OPT):
n is from 1 to 8:
for n from 1 to 4, arithmetic is done in halfword instructions where possible;
for n from 5 to 8, arithmetic is done in fullword instructions where possible
n is from 10 to 17: arithmetic is done in doubleword format
n is 9: fullword values are converted to doubleword format and then doubleword arithmetic is used (this is SLOWER than any of the above)
n is 18: doubleword values are converted to a higher precision format and then arithmetic is done using this higher precision (this is the SLOWEST of all for binary data items)
There is a similar issue with TRUNC(STD). TRUNC(BIN) already has the built-in slowness for the number of digits 1-9, so is not further affected.
From the publicly available documentation :
TRUNC(OPT) is a performance option. When TRUNC(OPT) is in effect, the compiler assumes that data conforms to PICTURE specifications in USAGE BINARY receiving fields in MOVE statements and arithmetic expressions. The results are manipulated in the most optimal way, either truncating to the number of digits in the PICTURE clause, or to the size of the binary field in storage (halfword, fullword, or doubleword).
Tip: Use the TRUNC(OPT) option only if you are sure that the data being moved into the binary areas will not have a value with larger precision than that defined by the PICTURE clause for the binary item. Otherwise, unpredictable results could occur. This truncation is performed in the most efficient manner possible; therefore, the results are dependent on the particular code sequence generated. It is not possible to predict the truncation without seeing the code sequence generated for a particular statement.
Read the "Tip" very carefully and see what it means for your situation. (Hint : it means it does not make sense to ask the question you did because it literally says that "whatever happens, it was unpredictable" or iow "there is no explanation for what happens").
To make the compiler behaviour predictable, switch to either TRUNC(BIN) or TRUNC(STD). STD is good for standards compliance but bad for CPU usage, BIN is good for CPU usage but requires you to be a bit careful (because decimal truncation simply will not happen).
I'm looking for the amount of storage in bytes (MB, GB, TB, etc.) required to store a single human genome. I read a few articles on Wikipedia about DNA, chromosomes, base pairs, and genes, and I have a rough guess, but before disclosing anything I'd like to see how others would approach this issue.
An alternative question would be how many atoms are there in human DNA, but that would be off topic for this site.
I understand that this will be an approximation, so I'm looking for the minimal value that would be able to store DNA of any human.
If you trust such things, here is what Wikipedia claims (from http://en.wikipedia.org/wiki/Human_genome#Information_content):
The 2.9 billion base pairs of the haploid human genome correspond to a
maximum of about 725 megabytes of data, since every base pair can be
coded by 2 bits. Since individual genomes vary by less than 1% from
each other, they can be losslessly compressed to roughly 4 megabytes.
You do not store all the DNA in one stream; rather, most of the time it is stored by chromosome.
A large chromosome takes about 300 MB and a small one about 50 MB.
Edit:
I think the first reason why it is not saved as 2 bits per base pair is that this would be a hurdle to working with the data. Most people would not know how to convert it. And even if a conversion program were provided, a lot of people in large companies or research institutes are not allowed to install programs, would need to ask, or would not know how.
1 GB of storage costs nothing, and even downloading 3 GB takes only about 4 minutes at 100 Mbit/s; most companies have faster connections.
Another point is that the data isn't as simple as you are told.
E.g. the sequencing method invented by Craig Venter was a great breakthrough but has its downsides. It could not separate long chains of the same base pair, so it is not always 100% clear whether there are 8 A's or 9 A's. Things you have to take care of later on...
Another example is DNA methylation, because you can't store this information in a 2-bit representation.
Basically, each base pair takes 2 bits (you can use 00, 01, 10, 11 for T, G, C, and A). Since there are about 2.9 billion base pairs in the human genome, (2 × 2.9 billion) bits ≈ 725 MB, or about 691 MiB.
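A quick back-of-the-envelope check in Python (the 2.9 billion figure is just the approximation quoted above):

# Uncompressed size of a 2-bit-per-base haploid human genome.
BASE_PAIRS = 2_900_000_000   # approximate haploid count used above
BITS_PER_BASE = 2

total_bytes = BASE_PAIRS * BITS_PER_BASE // 8
print(round(total_bytes / 10**6), "MB (decimal megabytes)")   # ~725 MB
print(round(total_bytes / 2**20), "MiB (binary mebibytes)")   # ~691 MiB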
I'm no expert, however, the Human Genome page on Wikipedia states the following:
Raw MB:
Male (XY): 770MB
Female (XX): 756MB
I'm not sure where their variance comes from, but I'm sure you can figure it out.
Yes, the minimum storage space needed for whole human DNA is about 770 MB.
However, the 2-bit representation is impractical. It is hard to search through or do computations on it. Therefore, some mathematicians designed more effective ways to store those sequences of bases and use them in search and comparison algorithms. One such example is GARLI.
This application runs on my PC right now, and I have the human genome stored in 1563 MB.
The human genome contains over 3 billion base pairs. So if you represented each base pair as two bits then it would take over 6.15 × 10⁹ bits or approximately 770 MB.
I just did it too. The raw sequence is ~700 MB. If one uses a fixed storage sequence, or a fixed sequence-storage algorithm, and the fact that the changes between individuals are about 1%, I calculated ~120 MB with a per-chromosome, sequence-offset, state-delta storage. That's it for the storage.
There are 4 nucleotide bases that make up our DNA: A, C, G, T. Therefore each base in the DNA takes up 2 bits. There are around 2.9 billion bases, so that's around 700 megabytes. The weird thing is that this would just about fill a normal data CD! Coincidence?!?
All answers are leaving out the fact that nuDNA is not the only DNA that defines a human genome. mtDNA is also inherited, and it contributes an additional 16,500 base pairs to a human genome, bringing it more in line with the Wikipedia figures of 770 MB for males and 756 MB for females.
This does not mean that a human genome can easily be stored on a 4 GB USB stick. Bits do not represent information by themselves; it is the combination of bits that represents information. So in the case of nuDNA and mtDNA, the bits are encoded (not to be confused with compressed) to represent proteins and enzymes that in themselves would require many MB of raw data to represent, especially in terms of functionality.
Food for thought: 80% of the human genome is called "non-coding" DNA, so did you really believe that the entire human body and brain can be represented in a mere 151 to 154 MB of raw data?
Most answers, except those from users slayton, rauchen and Paul Amstrong, are dead wrong if this is about pure one-to-one storage without compression techniques.
The human genome with 3 Gb of nucleotides corresponds to 3 GB of bytes, not ~750 MB. The constructed "haploid" genome according to NCBI is currently 3436687 kb, or 3.436687 Gb, in size. Check here for yourself.
Haploid = a single copy of each chromosome.
Diploid = two versions of the haploid set.
Humans have 22 unique chromosomes x 2 = 44.
The male 23rd pair is X, Y, making 46 in total.
The female 23rd pair is X, X, and thus also 46 in total.
For males it would be 23 + 1 chromosomes of data storage on a HDD, and for females 23 chromosomes, explaining the small differences mentioned now and then in answers. The X chromosome from males is equal to the X chromosome from females.
Thus loading the genome (23 + 1) into memory is done in parts, via BLAST, using databases constructed from FASTA files. Regardless of whether it is zipped or not, nucleotides hardly compress. Back in the early days one of the tricks used was to replace tandem repeats (GACGACGAC with a shorter coding, e.g. "3GAC"; 9 bytes to 4 bytes). The reason was to save hard-drive space (in the era of 500 MB-2 GB HDD platters with 7,200 rpm and SCSI connectors). For sequence searching this was also done with the query.
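For what it's worth, a small Python sketch of that tandem-repeat trick (the function name and the fixed three-letter unit are simplifications of mine; real tools handled variable unit lengths and round-trip decoding):

import re

def shorten_tandem_repeats(seq: str, unit_len: int = 3) -> str:
    # Replace runs of a repeated unit (e.g. GACGACGAC) with "<count><unit>" (e.g. 3GAC).
    pattern = re.compile(r"([ACGT]{%d})\1+" % unit_len)

    def repl(match: re.Match) -> str:
        unit = match.group(1)
        count = len(match.group(0)) // unit_len
        return f"{count}{unit}"

    return pattern.sub(repl, seq)

print(shorten_tandem_repeats("TTGACGACGACAA"))   # -> TT3GACAA (9 bytes become 4)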
If "coded nucleotide" storage would be 2-bit per letter then you get for a byte:
A = 00
C = 01
G = 10
T = 11
Only this way you fully profit from positions 1,2,3,4,5,6,7 and 8 for 1 byte of coding. For example the combination 00.01.10.11 (as byte 00011011) would then correspond for "ACTG" (and show in a textfile as an unrecognizable character). This alone is responsible for a four times reduction in file-size as we see in other answers. Thus 3.4Gb will be downsized to 0.85917175 Gb... ~860MB including a then required conversion program (23kb-4mb).
But... in biology you want to be able to read something thus compression gzipped is more than enough. Unzipped you can still read it. If this byte filling was used it becomes harder to read the data. That's why fasta-files are plain-text files in reality.
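A minimal Python sketch of the byte-filling described above (the pack helper, and the assumption that the sequence length is a multiple of 4 and contains no N's, are mine):

# 2 bits per base, 4 bases per byte, mapping as in the table above.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

packed = pack("ACGT")
print(format(packed[0], "08b"))                                      # 00011011, one byte for four bases
print(len("ACGT" * 1000), "->", len(pack("ACGT" * 1000)), "bytes")   # 4000 -> 1000, the fourfold reduction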
There are only 2 types of base pairs: cytosine can only bind to guanine, and adenine can only bind to thymine.
So each base pair can be considered a single bit.
This means that an entire strand of human DNA, ~3 billion "bits", would be right around ~350 megabytes.
One base -- T, C, A, G (in the base-4 number system: 0, 1, 2, 3) -- is encoded as two bits (not one), so one base pair is encoded by four bits.