Was there something in Cobol intrinsically making it susceptible to Y2K issues?

I know that a lot of Y2K efforts/scare was somehow centered on COBOL, deservedly or not.
(heck, I saw a minor Y2K bug in a Perl script that broke on 1/1/2000)
What I'm interested in, was there something specific to COBOL as a language which made it susceptible to Y2K issues?
That is, as opposed to merely the age of most programs written in it, the consequent need to skimp on memory/disk usage driven by old hardware, and the fact that nobody anticipated those programs would survive for 30 years?
I'm perfectly happy if the answer is "nothing specific to COBOL other than age" - merely curious, knowing nothing about COBOL.

It was 80% about storage capacity, pure and simple.
People don't realize that the capacity of their laptop hard drive today would have cost millions in 1980. You think saving two bytes is silly? Not when you have 100,000 customer records, and a hard drive the size of a refrigerator held 20 megabytes and required a special room to keep it cool.
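As a back-of-envelope check of that claim (the record count and drive size are the numbers from this answer, not authoritative), here is the arithmetic as a short Python sketch:
records = 100_000
bytes_saved_per_record = 2            # the "19" dropped from each stored year
drive_capacity = 20 * 1024 * 1024     # a 20 MB drive
saved = records * bytes_saved_per_record
print(f"{saved:,} bytes saved = {saved / drive_capacity:.1%} of the whole drive")
# -> 200,000 bytes saved = 1.0% of the whole drive, and real records usually
#    held several dates (birth, account opened, last payment), multiplying the saving.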

Yes and no. In COBOL you had to declare variables such that you actually had to say how many digits there were in a number, i.e., YEAR PIC 99 declared the variable YEAR such that it could only hold two decimal digits. So yes, it was easier to make that mistake than in C, where you would have an int or short or char as the year and still have plenty of room for years greater than 99. Of course that doesn't protect you from printfing 19%d in C and still having the problem in your output, or from making other internal calculations based on thinking the year would be less than or equal to 99.
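To illustrate the same trap outside COBOL, here is a small hypothetical Python sketch of the "19%d" problem the answer mentions: the storage is wide enough, but hard-coding the century in output or arithmetic reintroduces the bug.
year = 2000                      # plenty of room in the integer itself
print("19%02d" % (year % 100))   # prints "1900" -- the classic rollover bug
print(str(year))                 # prints "2000" -- keep the full year instead
age_wrong = (year % 100) - 65    # someone born in 1965 comes out as -65 years old
age_right = year - 1965          # 35, as expected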

It seemed to be more a problem of people not knowing how long their code would be used, so they chose to use 2 digit years.
So, nothing specific to COBOL, it is just that COBOL programs tend to be critical and old so they were more likely to be affected.

Was there something in Cobol intrinsically making it susceptible to Y2K issues?
Programmers1. And the systems where COBOL programs run2.
1: They didn't design forward looking 30 years. I can't blame them really. If I had memory constraints, between squeezing 2 bytes per date and making it work 30 years later, most likely I would make the same decision.
2: The systems could have had the same problem if the hardware stored the year in two digits.

Fascinating question. What is the Y2K problem, in essence? It's the problem of not defining your universe sufficiently. There was no serious attempt to model all dates, because space was more important (and the apps would be replaced by then). In COBOL that mattered at every level: be efficient and don't overdeclare the memory you need, both in storage and in the program.
Where efficiency is important, we commit Y2Kish errors... We do this every time we store a date in the DB without a timezone. So modern storage is definitely subject to Y2Kish errors, because we try to be efficient with space used (though I bet it's over-optimizing in many cases, especially at the enterprise overdo-everything level).
On the other hand, we avoid Y2Kish errors on the application level because every time you work with, say, a Date (in Java, let's say) it always carries around a ton of baggage (like timezone). Why? Because Date (and many other concepts) are now part of the OS, so the OS-making smart dudes try to model a full-blown concept of date. Since we rely on their concept of date, we can't screw it up... and it's modular and replaceable!
Newer languages with built-in datatypes (and facilities) for many things like date, as well as huge memory to play with, help avoid a lot of potential Y2Kish problems.

It was two-part: 1) the age/longevity of COBOL software, and 2) the 80-character limit on data records.
First: most software of that age used only 2-digit numbers for year storage, since no one figured their software would last that long! COBOL had been adopted by the banking industry, which is notorious for never throwing away code. Most other software WAS thrown away, while the banks' code wasn't!
Secondly, because COBOL was constrained to 80 characters per record of data (due to the size of punch cards!), developers were under even greater pressure to limit the size of fields. Since they figured "the year 2000 won't be here till I'm long retired!", the 2 characters of saved data were huge!

It was much more related to storing the year in data items that could only hold values from 0 to 99 (two characters, or two decimal digits, or a single byte). That and calculations that made similar assumptions about year values.
It was hardly a Cobol-specific thing. Lots of programs were impacted.

There were some things about COBOL that aggravated the situation.
it's old, so less use of library code, more homegrown everything
it's old, so pre-internet, pre-social-networking, more NIH, fewer frameworks, more custom stuff
it's old, so, less abstract, more likely to have open-coded solutions
it's old, so, go back far enough and saving 2 bytes might have, sort of, been important
it's old, so it predates SQL. Legacy operating systems even had indexed, record-oriented disk files to make rolling-your-own-database-in-every-program a little bit easier.
"printf" format strings and data type declarations were the same thing; everything had n digits
I've seen giant Fortran programs with no actual subroutines. Really, one 3,000-line main program, not a single non-library subroutine, that was it. I suppose this might have happened in the COBOL world, so now you have to read every line to find the date handling.

COBOL never came with any standard date handling library.
So everyone coded their own solution.
Some solutions were very bad vis-a-vis the millennium. Most of those bad solutions did not matter, as the applications did not live 40+ years. The not-so-tiny minority of bad solutions caused the well-known Y2K problem in the business world.
(Some solutions were better. I know of COBOL systems coded in the 1950s with a date format good until 2027 -- must have seemed forever at the time; and I designed systems in the 1970s that are good until 2079).
However, had COBOL had a standard date type....
03 ORDER-DATE PIC DATE.
....industry-wide solutions would have been available at the compiler level, cutting the complexity of any needed remediation.
Moral: use languages with standard libraries.

COBOL 85 (the 1985 standard) and earlier versions didn't have any way to obtain the current century**, which was one factor intrinsic to COBOL that discouraged the use of 4-digit years even after 2 bytes extra storage space was no longer an issue.
** Specific implementations may have had non standard extensions for this purpose.

The only intrinsic issue with COBOL was its original (late 1960s) standard statement for retrieving the current system date, which was:
ACCEPT todays-date FROM DATE
This returned a 6-digit number with the date in YYMMDD format.
However, even this was not necessarily a problem, as we wrote code in the '90s using this statement which just checked if the year portion was less than 70 and assumed that the date was 20YY, which would have made it a Y2070 problem instead. :-)
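For illustration, here is roughly what that windowing trick looks like, sketched in Python rather than COBOL (the pivot value of 70 is the one from this answer):
def expand_two_digit_year(yy, pivot=70):
    # Years below the pivot are assumed to be 20xx, the rest 19xx.
    return 2000 + yy if yy < pivot else 1900 + yy

assert expand_two_digit_year(99) == 1999
assert expand_two_digit_year(0) == 2000
assert expand_two_digit_year(69) == 2069   # still fine
assert expand_two_digit_year(70) == 1970   # ...and 2070 comes back as 1970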
The standard was extended later (COBOL-85, I think) so you could ask for the date in different formats, like:
ACCEPT todays-date FROM CENTURY-DATE
Which gave you an 8-digit number with the date as CCYYMMDD.
As you and others have pointed out, many other computer programming languages allowed for 'lossy' representation of dates/years.

The problem was really about memory and storage constraints in the late '70s and early '80s.
When your quarter-million-dollar computer had 128K of memory and 4 disks totalling about 6 megabytes, you could either ask your management for another quarter mill for a 256K machine with 12 meg of disk storage, or be very, very efficient about space.
So all sorts of space-saving tricks were used. My favourite was to store a YYMMDD date such as 991231 in a packed decimal field, x'9912310C', then knock off the last (sign) byte and store it as x'991231'. So instead of 6 bytes you only took up 3 bytes.
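For anyone who hasn't met packed decimal, here is a rough Python sketch of that trick (an illustration of the idea, not actual COBOL COMP-3 handling): two digits per byte, sign byte dropped, so the six digits of YYMMDD fit in 3 bytes.
def pack_yymmdd(yymmdd):
    digits = f"{yymmdd:06d}"
    # Two decimal digits per byte, high nibble first (BCD-style).
    return bytes(int(digits[i]) << 4 | int(digits[i + 1]) for i in range(0, 6, 2))

def unpack_yymmdd(packed):
    return int("".join(f"{b >> 4}{b & 0x0F}" for b in packed))

stored = pack_yymmdd(991231)
assert stored == bytes([0x99, 0x12, 0x31]) and len(stored) == 3
assert unpack_yymmdd(stored) == 991231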
Other tricks included some hokey encoding schemes for prices -- e.g., code 12 -> $19.99.

Related

Do CPUs make mistakes?

Imagine that a regular computer works intensively for 5 years non-stop. The CPU always runs at 100% and is constantly reading and writing to memory. Is it true that the computer will not make a single mistake?
Even in the absence of any errors caused by the CPU, storage elements are subject to bit flips (known as Single Event Upsets) from cosmic radiation. More information on that in Compiling an application for use in highly radioactive environments.
Radiation effects are more severe at higher altitudes, where the atmosphere provides less protection, so computers in Denver experience more bit flips than computers in Miami or Los Angeles. And similarly if you are designing equipment for use in a hospital near an X-ray machine.
Unless your hypothetical computer has an extremely small amount of memory, it is unlikely to work without any mistake for 5 years. Note however that some of the bit flips may occur in parts of the memory that you are not using, in which case they won't affect you.
You may find it interesting to read How to Kill a Supercomputer. Typical ECC (Error Correcting Code) memory can correct any single bit flip in a word, and can detect but not correct any two bit flips in a word. Note also that in some cases radiation can permanently damage memory cells, and those cells will never recover even after a cold start.
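To make the correction/detection idea concrete, here is a minimal sketch of single-error correction using a Hamming(7,4) code in Python; it is an illustration of the principle only, since real ECC DIMMs use wider SECDED codes over whole memory words.
def hamming74_encode(d1, d2, d3, d4):
    # Three parity bits protect four data bits; codeword positions are 1..7.
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-based position of the flipped bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1           # flip it back
    return c, syndrome

codeword = hamming74_encode(1, 0, 1, 1)
corrupted = list(codeword)
corrupted[4] ^= 1                      # simulate a cosmic-ray bit flip at position 5
fixed, position = hamming74_correct(corrupted)
assert fixed == codeword and position == 5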

System trading applications - caching historical data in local database?

Most trading applications receive data feeds from commercial providers such as IQFeed or brokerages that support a trading API. Is there merit in storing the data in a local database? Intraday data is just massive in size, and the database would grow very quickly with 1-minute data for just 50 stocks, never mind tick-by-tick data. I suspect this would be a nightmare for database backup and may impact performance.
If you get historical data in text files on DVD or online, then storing it in the database is the only logical choice, but would it be still a good idea if you get it through API?
It's all about storage space, really. You can definitely do it through the API, but make sure you don't do it from the same application that is doing the automated trading for you.
As you said, tick data is pretty much out of the question; 1-minute data would mean approximately 400 bars per day per symbol, or about 20,000 bars a day for 50 symbols.
The storage space can be calculated from that: if you are storing OHLC, each bar can be held in four values of type Int.
As the other answer pointed out, performance may become an issue with more and more symbols, but it shouldn't be a problem with 50 symbols on 1-minute bars.
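As a rough sizing sketch using the numbers above (the per-field sizes are my assumptions, not from the answer):
bars_per_symbol_per_day = 400          # ~6.5 trading hours of 1-minute bars
symbols = 50
bytes_per_bar = 4 * 4 + 8 + 4          # OHLC as four 4-byte ints + timestamp + volume

daily = bars_per_symbol_per_day * symbols * bytes_per_bar
yearly = daily * 252                   # trading days per year
print(f"{daily / 1024:.0f} KB/day, {yearly / 1024 ** 2:.0f} MB/year")
# -> roughly 0.5 MB per day and ~135 MB per year: trivial for any modern database.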
This is a performance question. If the API is fast enough, then use that. If it's not and caching will help, then cache it. Only your application and your usage patterns can determine how much truth and necessity apply to these statements.

Thoughts on minimize code and maximize data philosophy

I have heard of the concept of minimizing code and maximizing data, and was wondering what advice other people can give me on how/why I should do this when building my own systems?
Typically data-driven code is easier to read and maintain. I know I've seen cases where data-driven has been taken to the extreme and winds up very unusable (I'm thinking of some SAP deployments I've used), but coding your own "Domain Specific Languages" to help you build your software is typically a huge time saver.
The pragmatic programmers remain in my mind the most vivid advocates of writing little languages that I have read. Little state machines that run little input languages can get a lot accomplished with very little space, and make it easy to make modifications.
A specific example: consider a progressive income tax system, with tax brackets at $1,000, $10,000, and $100,000 USD. Income below $1,000 is untaxed. Income between $1,000 and $9,999 is taxed at 10%. Income between $10,000 and $99,999 is taxed at 20%. And income above $100,000 is taxed at 30%. If you were to write this all out in code, it'd look about as you suspect:
total_tax_burden(income) {
    if (income < 1000)
        return 0
    if (income < 10000)
        return 0.1 * (income - 1000)
    if (income < 100000)
        return 900 + 0.2 * (income - 10000)
    return 18900 + 0.3 * (income - 100000)
}
Adding new tax brackets, changing the existing brackets, or changing the tax burden in the brackets, would all require modifying the code and recompiling.
But if it were data-driven, you could store this table in a configuration file:
1000:0
10000:10
100000:20
inf:30
Write a little tool to parse this table and do the lookups (not very difficult, right?) and now anyone can easily maintain the tax rate tables. If congress decides that 1000 brackets would be better, anyone could make the tables line up with the IRS tables, and be done with it, no code recompiling necessary. The same generic code could be used for one bracket or hundreds of brackets.
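A minimal sketch of that little tool in Python, assuming the threshold:rate format shown above and a hypothetical file name, might look like this:
def load_brackets(path):
    # Parse lines like "10000:10" into (threshold, rate) pairs.
    brackets = []
    with open(path) as f:
        for line in f:
            threshold, rate = line.strip().split(":")
            brackets.append((float("inf") if threshold == "inf" else float(threshold),
                             float(rate) / 100))
    return brackets

def total_tax_burden(income, brackets):
    tax, lower = 0.0, 0.0
    for threshold, rate in brackets:
        tax += rate * (min(income, threshold) - lower)   # tax the slice in this bracket
        if income <= threshold:
            break
        lower = threshold
    return tax

# total_tax_burden(150_000, load_brackets("tax_brackets.conf"))
# -> 0 + 0.1 * 9,000 + 0.2 * 90,000 + 0.3 * 50,000 = 33,900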
And now for something that is a little less obvious: testing. The AppArmor project has hundreds of tests for what system calls should do when various profiles are loaded. One sample test looks like this:
#! /bin/bash
# $Id$
# Copyright (C) 2002-2007 Novell/SUSE
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, version 2 of the
# License.
#=NAME open
#=DESCRIPTION
# Verify that the open syscall is correctly managed for confined profiles.
#=END
pwd=`dirname $0`
pwd=`cd $pwd ; /bin/pwd`
bin=$pwd
. $bin/prologue.inc
file=$tmpdir/file
okperm=rw
badperm1=r
badperm2=w
# PASS UNCONFINED
runchecktest "OPEN unconfined RW (create) " pass $file
# PASS TEST (the file shouldn't exist, so open should create it)
rm -f ${file}
genprofile $file:$okperm
runchecktest "OPEN RW (create) " pass $file
# PASS TEST
genprofile $file:$okperm
runchecktest "OPEN RW" pass $file
# FAILURE TEST (1)
genprofile $file:$badperm1
runchecktest "OPEN R" fail $file
# FAILURE TEST (2)
genprofile $file:$badperm2
runchecktest "OPEN W" fail $file
# FAILURE TEST (3)
genprofile $file:$badperm1 cap:dac_override
runchecktest "OPEN R+dac_override" fail $file
# FAILURE TEST (4)
# This is testing for bug: https://bugs.wirex.com/show_bug.cgi?id=2885
# When we open O_CREAT|O_RDWR, we are (were?) allowing only write access
# to be required.
rm -f ${file}
genprofile $file:$badperm2
runchecktest "OPEN W (create)" fail $file
It relies on some helper functions to generate and load profiles, test the results of the functions, and report back to users. It is far easier to extend these little test scripts than it is to write this sort of functionality without a little language. Yes, these are shell scripts, but they are so far removed from actual shell scripts ;) that they are practically data.
I hope this helps motivate data-driven programming; I'm afraid I'm not as eloquent as others who have written about it, and I certainly haven't gotten good at it, but I try.
In modern software the line between code and data can become awfully thin and blurry, and it is not always easy to tell the two apart. After all, as far as the computer is concerned, everything is data, unless it is determined by existing code - normally the OS - to be otherwise. Even programs have to be loaded into memory as data, before the CPU can execute them.
For example, imagine an algorithm that computes the cost of an order, where larger orders get lower prices per item. It is part of a larger software system in a store, written in C.
The algorithm reads a file that contains an input table, provided by the management, with the various per-item prices and the corresponding order size thresholds. Most people would argue that a file with a simple input table is, of course, data.
Now, imagine that the store changes its policy to some sort of asymptotic function, rather than pre-selected thresholds, so that it can accommodate insanely large orders. They might also want to factor in exchange rates and inflation - or whatever else the management people come up with.
The store hires a competent programmer and she embeds a nice mathematical expression parser in the original C code. The input file now contains an expression with global variables, functions such as log() and tan(), as well as some simple stuff like the Planck constant and the rate of carbon-14 degradation.
cost = (base * ordered * exchange * ... + ... / ...)^13
Most people would still argue that the expression, even if not as simple as a table, is in fact data. After all it is probably provided as-is by the management.
The store receives a large number of complaints from clients who went brain-dead trying to estimate their expenses, and from the accounting people about the large amount of loose change. The store decides to go back to the table for small orders and use a Fibonacci sequence for larger orders.
The programmer gets tired of modifying and recompiling the C code, so she embeds a Python interpreter instead. The input file now contains a Python function that polls a roomful of Fib(n) monkeys for the cost of large orders.
Question: Is this input file data?
From a strictly technical point of view, there is nothing different. Both the table and the expression needed to be parsed before use. The mathematical expression parser probably supported branching and functions - it might not have been Turing-complete, but it still used a language of its own (e.g. MathML).
Yet now many people would argue that the input file just became code.
So what is the distinguishing feature that turns the input format from data into code?
Modifiability: Having to recompile the whole system to effect a change is a very good indication of a code-centric system. Yet I can easily imagine (well, more like I have actually seen) software that has been designed incompetently enough to have e.g. an input table built-in at compile time. And let's not forget that many applications still have icons - that most people would deem data - built in their executables.
Input format: This is the - in my opinion, naively - most common factor that people consider: "If it is in a programming language then it is code". Fine, C is code - you have to compile it after all. I would also agree that Python is also code - it is a full blown language. So why isn't XML/XSL code? XSL is a quite complex language in its own right - hence the L in its name.
In my opinion, none of these two criteria is the actual distinguishing feature. I think that people should consider something else:
Maintainability: In short, if the user of the system has to hire a third party to get the expertise needed to modify the system's behaviour, then the system should be considered code-centric to a degree.
This, of course, means that whether a system is data-driven or not should be considered at least in relation to the target audience - if not in relation to the client on a case-by-case basis.
It also means that the distinction can be impacted by the available toolset. The UML specification is a nightmare to go through, but these days we have all those graphical UML editors to help us. If there was some kind of third-party high-level AI tool that parses natural language and produces XML/Python/whatever, then the system becomes data-driven even for far more complex input.
A small store probably does not have the expertise or the resources to hire a third party. So, something that allows the workers to modify its behaviour with the knowledge that one would get in an average management course - mathematics, charts etc - could be considered sufficiently data-driven for this audience.
On the other hand, a multi-billion international corporation usually has in its payroll a bunch of IT specialists and Web designers. Therefore, XML/XSL, Javascript, or even Python and PHP are probably easy enough for it to handle. It also has complex enough requirements that something simpler might just not cut it.
I believe that when designing a software system, one should strive to achieve that fine balance in the used input formats where the target audience can do what they need to, without having to frequently call on third parties.
It should be noted that outsourcing blurs the lines even more. There are quite a few problems for which the current technology simply does not allow the solution to be approachable by the layman. In that case the target audience of the solution should probably be considered to be the third party to which the operation would be outsourced.
That third party can be expected to employ a fair number of experts.
One of five maxims under the Unix Philosophy, as presented by Rob Pike, is this:
Data dominates. If you have chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
It is often shortened to, "write stupid code that uses smart data."
Other answers have already dug into how you can often code complex behavior with simple code that just reacts to the pattern of its particular input. You can think of the data as a domain-specific language, and of your code as an interpreter (maybe a trivial one).
Given lots of data you can go further: the statistics can power decisions. Peter Norvig wrote a great chapter illustrating this theme in Beautiful Data, with text, code, and data all available online. (Disclosure: I'm thanked in the acknowledgements.) On pp. 238-239:
How does the data-driven approach compare to a more traditional software development process wherein the programmer codes explicit rules? ... Clearly, the handwritten rules are difficult to develop and maintain. The big advantage of the data-driven method is that so much knowledge is encoded in the data, and new knowledge can be added just by collecting more data. But another advantage is that, while the data can be massive, the code is succinct -- about 50 lines for correct, compared to over 1,500 for ht://Dig's spelling code. ...
Another issue is portability. If we wanted a Latvian spelling-corrector, the English metaphone rules would be of little use. To port the data-driven correct algorithm to another language, all we need is a large corpus of Latvian; the code remains unchanged.
He shows this concretely with code in Python using a dataset collected at Google. Besides spelling correction, there's code to segment words and to decipher cryptograms -- in just a couple pages, again, where Grady Booch's book spent dozens without even finishing it.
"The Unreasonable Effectiveness of Data" develops the same theme more broadly, without all the nuts and bolts.
I've taken this approach in my work for another search company and I think it's still underexploited compared to table-driven/DSL programming, because most of us weren't swimming in data so much until the last decade or two.
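For readers who haven't seen it, here is a condensed sketch in the spirit of Norvig's data-driven corrector (not his exact code; big.txt stands in for whatever corpus you have): all the knowledge lives in the word counts, and the code stays tiny and language-agnostic.
import re
from collections import Counter

WORDS = Counter(re.findall(r"[a-z]+", open("big.txt").read().lower()))

def edits1(word):
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    # Prefer the word itself if known, else any known word one edit away.
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=WORDS.get)

# To port it to Latvian, swap in a Latvian corpus; the code does not change.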
In languages in which code can be treated as data it is a non-issue. You use what's clear, brief, and maintainable, leaning towards data, code, functional, OO, or procedural, as the solution requires.
In procedural code, the distinction is marked, and we tend to think about data as something stored in a specific way, but even in procedural code it is best to hide the data behind an API, or behind an object in OO.
A lookup(avalue) can be reimplemented in many different ways during its lifetime, as long as it starts as a function.
...All the time I design programs for nonexisting machines and add: 'if we now had a machine comprising the primitives here assumed, then the job is done.'
... In actual practice, of course, this ideal machine will turn out not to exist, so our next task --structurally similar to the original one-- is to program the simulation of the "upper" machine... But this bunch of programs is written for a machine that in all probability will not exist, so our next job will be to simulate it in terms of programs for a next lower level machine, etc., until finally we have a program that can be executed by our hardware...
E. W. Dijkstra in Notes on Structured Programming, 1969, as quoted by John Allen, in Anatomy of Lisp, 1978.
When I think of this philosophy which I agree with quite a bit, the first thing that comes to mind is code efficiency.
When I'm writing code, I know for sure it isn't always anything close to perfect, or even fully informed. Knowing enough to get close to maximum efficiency out of a machine when it is needed, and good efficiency the rest of the time (perhaps trading some off for better workflow), has allowed me to produce high-quality finished products.
Coding in a data-driven way, you end up using code for what code is for. To go and 'outsource' every variable to files would be foolishly extreme; the functionality of a program needs to be in the program, while the content, settings, and other factors can be managed by the program.
This also allows for much more dynamic applications and new features.
If you have even a simple form of database, you are able to apply the same functionality to many states. You may also do all manner of creative things like changing the context of what your program is doing based on file header data or perhaps directory, file name or extension, though not all data is necessarily stored on a filesystem.
Finally keeping your code in a state where it is simply handling data puts you in a state of mind where you are closer to envisioning what is actually going on. This also keeps the bulk out of your code, greatly reducing bloatware.
I believe it makes code more maintainable, more flexible and more efficient aaaand I like it.
Thank you to the others for your input on this as well! I found it very encouraging.

Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

<tl;dr>
In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches?
</tl;dr>
<introduction>
I am programming something I have never done before and there might already be tools out there to do the exact thing I am programming but at this point I am having too much fun to care so I am still going to do it from scratch, even if there is a tool for this.
So anyways, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also keep track of different submissions in a version control system. The simplest solution I could think of is to implement something that keeps track of the text body and the diff patch history of different versions of the text body as objects in Ruby, and then serialize it, preferably in human-readable form (so I'll most likely use YAML for this) for editing if needed, whether due to corruption by a software bug or a mistake made by an admin doing some version editing.
So at first I just tried to dive in head first into this feature, only to find that the problem of generating a diff patch is more difficult than I thought to do efficiently. So I did some research and came across some ideas. Some I have implemented already and some I have not. However, it all pretty much revolves around the longest common subsequence problem, as you would already know if you have done anything with diff or diff-like features, and optimizing the function that solves it.
Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line like in most longest common subsequence algorithms I have seen examples of, I increment when I have a non-matching line so as to calculate edit distance instead of longest common subsequence. Although as far as I can tell between the two approaches, they are essentially two sides of the same coin so either could be used to derive an answer. It then back-traces through the comparison matrix and notes when there was an incrementation and in which adjacent cell (West, Northwest, or North) to determine that line's diff entry and assumes all other lines to be unchanged.
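For concreteness, here is a sketch of that approach in Python (the actual implementation is in Ruby; this just mirrors the steps described above): trim the common prefix and suffix, fill a standard edit-distance matrix over what remains, and backtrace it into a patch.
def diff_matrix(a, b):
    # Trim the matching prefix and suffix so the matrix only covers the change.
    start = 0
    while start < len(a) and start < len(b) and a[start] == b[start]:
        start += 1
    end_a, end_b = len(a), len(b)
    while end_a > start and end_b > start and a[end_a - 1] == b[end_b - 1]:
        end_a -= 1
        end_b -= 1
    a_mid, b_mid = a[start:end_a], b[start:end_b]

    # Classic dynamic-programming edit distance over the changed region.
    n, m = len(a_mid), len(b_mid)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a_mid[i - 1] == b_mid[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # a deleted line (North)
                          d[i][j - 1] + 1,         # an inserted line (West)
                          d[i - 1][j - 1] + cost)  # unchanged or changed (Northwest)
    return start, d   # backtracing d yields the per-line diff entries

offset, matrix = diff_matrix(["a", "b", "c", "d"], ["a", "x", "c", "d"])
assert matrix[-1][-1] == 1   # exactly one changed line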
Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started getting worried about optimizing at least enough that a spammer who somehow knew how I implemented the version control system and knew my worst-case entry still wouldn't be able to hit the server that hard. After some searching and reading of research papers and articles on the internet, I've come across several optimizations that seem decent, but all seem to have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them with known pros and cons.
</introduction>
<optimizations>
Chop the compared sequences into multiple subsequences by splitting where lines are unchanged, and then truncating each section of unchanged lines at the beginning and end of each section. Then solve the edit distance of each subsequence.
Pro: Changes the time increase as the changed area gets bigger from a quadratic increase to something more similar to a linear increase.
Con: Figuring out where to split already seems like you have to solve edit distance, except now you don't care how it is changed. It would be fine if this were solvable by a process closer to solving Hamming distance, but a single insertion would throw this off.
Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness. Then solve the edit distance comparing the hash integers instead of the sequence elements themselves. (See the sketch after this list.)
Pro: The operation of comparing two integers is faster than the operation of comparing two strings, so a slight performance gain is received after every comparison, which can be a lot overall.
Con: Using a cryptographic hash function takes time to convert all the sequence elements and may end up costing more time for the conversion than you gain back from the integer comparisons. You could use the built-in hash function for a string, but that will not guarantee uniqueness.
Use lazy evaluation to only calculate the three center-most diagonals of the comparison matrix and then only calculate additional diagonals as needed. And then also use this approach to possibly remove the need, on some comparisons, to compare all three adjacent cells, as described here.
Pro: Can turn an algorithm that always takes O(n * m) time into one where that is only the worst-case scenario; the best case becomes practically linear, and the average case is somewhere between the two.
Con: It is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time comprehending how to convert it into Ruby based on how it is described at the site linked to above.
Make a C module and do the hard work at the native level in C, and just make a Ruby wrapper for it so Ruby can make all the calls to it that it needs.
Pro: I have to imagine that evaluating something like this in C could be a LOT faster.
Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.
This is an optimization for after the solving of edit distance, but the idea is to store additional combined diffs alongside the ones produced by each version, to make a delta-tree data structure with the most recently made diff as the root node of the tree, so getting to any version takes worst-case time of O(log n) instead of O(n).
Pro: Would make going back to an old version a lot faster.
Con: It would mean that with every new commit the delta-tree would get a new root node, which will cost time to reorganize the delta-tree for an operation that will be carried out a lot more often than going back a version, not to mention the unlikelihood that it will be an old version.
</optimizations>
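Regarding option 2, here is a small Python sketch of the idea (illustrative only; the real code would be Ruby): a cryptographic digest per line avoids accidental collisions, while simply interning lines into a dictionary gives genuinely unique small integers without hashing at all.
import hashlib

def digest_lines(lines):
    # Fixed-size tokens; comparing 20-byte digests is cheaper than long lines.
    return [hashlib.sha1(line.encode("utf-8")).digest() for line in lines]

def intern_lines(*versions):
    # Identical lines across versions get the same small integer id.
    table = {}
    return [[table.setdefault(line, len(table)) for line in lines]
            for lines in versions]

old_ids, new_ids = intern_lines(["a", "b", "c"], ["a", "x", "c"])
assert old_ids == [0, 1, 2] and new_ids == [0, 3, 2]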
So are these things worth the effort?
With regard to item 4 in your list, this seems to be (from what I can tell) how most gems work if there is any heavy lifting to be done by the code. Rails plays nicely with the gem system, so you should find that if you need to incorporate this - probably alongside other optimisations you have suggested here - it should be fine, although you may need to recompile for different platforms.

Does anyone know what was/is used as the DBMS for the infamous NSA call database?

Another question on SO suddenly got me wondering what the largest database in the world is (and how big it could be). A quick Google search turned up this: the NSA call database, created by the U.S. National Security Agency. Supposedly this database contains over 1.9 trillion records containing details relating to phone calls placed through AT&T and Verizon from as far back as 2001.
Does anyone have any idea what kind of DB system was used for this database? 1.9 trillion records seems to me like a lot more than even your typical large-scale commercial databases would have. But maybe I'm wrong. I also didn't research this extensively by any means, so perhaps the claim that the NSA call database is the biggest in the world is flat-out false.
Still, I'm interested to know what kind of DBMS, if any, could reasonably deal with this many records.
1.9 trillion rows multiplied by, say, 8000 bytes/row is, ummm, 15 petabytes? Did I do that arithmetic right? That's just one order of magnitude bigger than several well-known business databases. Googling "petabyte databases" gave me
ebay: one 2+ petabyte data warehouse and one 6+ petabyte data warehouse (2009)
facebook: 2+ petabyte data warehouse (2010)
Walmart: 2+ petabyte data warehouse (2010)
Bank of America: 1+ petabyte data warehouse (2010)
Dell: 1+ petabyte data warehouse (2010)
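As a quick check of the row arithmetic above (the 8,000 bytes/row figure is the answer's guess, not a known number):
rows = 1.9e12
bytes_per_row = 8_000
print(f"{rows * bytes_per_row / 1000 ** 5:.1f} PB")   # -> 15.2 PB, so yes, about 15 petabytes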
1.9 trillion rows are easily (cough) row-addressable in the range of a 64-bit unsigned int.
Physicists and astronomers seem to have the biggest targets. Stanford needs to manage about 155 petabytes of data for their Large Synoptic Survey Telescope. An astronomy project down the street from me generates about 10 petabytes a day, but they don't store nearly that much.
Heck, I almost forgot the point of the question. Greenplum and Teradata showed up the most often. But I don't think anybody who knows what the NSA actually uses will talk about it.
@Tomislav Nakic-Alfirevic: An awk program to print every 1000th line:
NR % 1000 == 0 {print $0}
Do you think the NSA would pay me for that? My house needs a new roof.
