Moving away from Itanium [closed] - cobol

We currently have a large business-critical application written in COBOL, running on OpenVMS (Integrity/Itanium).
As the months pass, there is more and more speculation about the lifetime of the Itanium architecture. Nothing is said out in the open, of course, but articles like this and this paint a worrying picture. Although I can find nothing official to support this, there are even murmurings in the corridors of our company of HP ditching OpenVMS and HP COBOL along with it.
I cannot believe that we are alone in this.
The way I see it, there are a few options:
Emulate some old hardware and run the application on that using a product like CHARON-VAX or CHARON-AXP. The pros are that the process should be relatively painless, especially if the 64-bit (AXP) option is used. The potential con is a degradation in performance (although this should be offset by ever-faster hardware);
Port the HP COBOL-based application to a more modern dialect of COBOL, such as Visual COBOL. The pros, then, are the fact that the porting effort is relatively low (it's still COBOL) and the fact that one can run the application on a Unix or Windows platform. The cons are that although you're porting COBOL, the fact that you're porting to a different operating system could make things tricky (esp. if there are OpenVMS-specific dependencies);
Automatically translate the COBOL into a more modern language like Java. This has the obvious benefit of immediately freeing one from all the legacy issues in one fell swoop: hardware support, operating system support, and especially finding administrators and programmers. Apart from this being a big job, an obvious downside is the fact that one will end up with non-idiomatic Java (or whatever target language is ultimately chosen); arguably, this is something that can be ameliorated over time.
A rewrite, from the ground up (naturally, using modern technologies). Anyone who has done this knows how expensive and time-consuming it is. I've only included it to make the list complete :)
Note that there is no dependency on a proprietary DBMS; the database is ISAM file-based.
So ... my question is:
What are others faced with the imminent obsolescence of Itanium doing to maintain business continuity when their platform of choice is OpenVMS and COBOL?
UPDATE:
We have had an official assurance from our local HP representative that Integrity/Itanium/OpenVMS will be supported at least up until 2022. I guess this means that this whole issue is less about the platform, and more about the language (COBOL).

The main problem with this effort will be the portions of the code that are OpenVMS specific. Most applications developed on OpenVMS use routines and procedures that are not easily ported to another platform. Rather than worry about specific language compatibility, I would initially focus on the runtime routines and command procedures used by the application.
An alternative approach may be to continue to use the current application while developing a new one or modifying a commercially available application to suit your needs. While the long-term status of Itanium is in question, history indicates that OpenVMS will remain viable for some time to come. There are still VAX machines being used today for business-critical applications. The fact that OpenVMS and its hardware are stable is the main reason for its longevity.
Dan

Looks like COBOL is the main dependency that keeps you worried. I understand that Itanium+OpenVMS in this picture is just a platform.
You're definitely not alone in running mission-critical stuff on OpenVMS. HP's site has an OpenVMS roadmap (both Alpha and Integrity); support currently stretches to 2015. Oracle seems to be trying to leverage its Sun assets in different domains recently.
In any case, if your worries are substantial (sure, we all worried about the Compaq and then HP transitions, and VAX >> Alpha >> Itanium before that), there's time to untie the COBOL dependency.
So I would look now into charting out a migration path from COBOL to a more portable language of choice (e.g. ANSI C/C++ without platform extensions). Perhaps Java isn't the friendliest choice, given Oracle's recent activity. A rewrite, however unpleasant, will be more progressive and will likely streamline the whole process. The sooner one starts, the sooner one completes.
Also, in addition to emulators, there's still plenty of second-hand hardware. Ironically, one company I know is just now phasing in Integrity platforms to supplant mission-critical Alphas -- I guess it's "corporate testing requirements"...
Doing nothing is an option as well, though obviously riskier: OpenVMS platforms have proven dependable, so alternatively, finding a reliable third-party support company may extend your hardware contingency into the future.

This summer's Rolling Roadmap makes porting off OpenVMS look like an excellent idea.
Given how much COBOL exists in the world finding people to support COBOL will not be a problem for the foreseeable future. As noted above there are COBOL compilers on other platforms. The problem lies in the OpenVMS system service calls and DEC language extensions your application uses. You do not mention where your data is stored, so worst case your COBOL uses RMS. There is a company that provides an implementation of many OpenVMS system services on Linux and the Unixes. Not needing to replace those services while porting to another operating system may reduce the complexity. Check out Sector7.com.

Considering Porting App from .NET to Erlang - need advice [closed]

I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and JQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my up-time and to allow me to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, GIT, TeamCity and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all now pretty comfortable with it. I'm wondering whether a process like we have is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang, is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a windows network) but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches using GIT, that get their own mini-QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9% and better kool-aid is a bit dangerous to drink while blind. Yes, you can get some astounding stability figures, but you need to write your program in a way that facilitates this. It does not come for free. It does not happen by magic either. Your application must be designed so that individual subsystems can fail and the rest of the system recovers. OTP will help you a lot - but it still takes time to learn.
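To make that concrete, here is a minimal sketch of the kind of supervision OTP gives you (the call_handler worker module is made up): a crashed worker is restarted rather than taking the whole node down.

-module(call_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Restart a crashed call_handler; escalate only after 5 crashes
%% within 10 seconds.
init([]) ->
    {ok, {{one_for_one, 5, 10},
          [{call_handler,
            {call_handler, start_link, []},
            permanent, 5000, worker, [call_handler]}]}}.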
Continuous integration: Easily done. If you can call rebar or make through your build-bot you are probably set here already. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters) - all can be run from rebar.
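As a sketch of what that looks like in practice (billing is a hypothetical module under test; the eunit include and the _test naming convention are standard):

-module(billing_tests).
-include_lib("eunit/include/eunit.hrl").

%% Functions ending in _test are picked up by eunit, which rebar
%% can run on every build.
rounds_up_to_whole_seconds_test() ->
    ?assertEqual(61, billing:billable_seconds(60.2)).

rejects_negative_durations_test() ->
    ?assertError(badarg, billing:billable_seconds(-1)).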
QA Cycle: I have not had any problems here. Again, if you are using rebar you can build embedded releases, which are minimal Erlang VMs you can copy anywhere and run (they are self-contained). You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit so you have an overlay of newer fixes. Your options are numerous. Git already helps you a lot here.
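For instance, overlaying a fix on a running node is little more than a code-path tweak (billing is again a made-up module name); run this in the node's shell:

%% Put the patch directory first on the code path, drop any old
%% version of the module, then load the fixed one.
code:add_patha("/opt/myapp/patches/ebin"),
code:purge(billing),
code:load_file(billing).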
Environmentalization: Easily done.
Logging centralization: Look into SASL and the error_logger. You can do anything you want here.
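A sketch of what reporting through error_logger looks like (the module and report fields are made up); a custom handler attached to the error_logger event manager can forward these reports to a central collector:

-module(telemetry).
-export([call_connected/2, call_failed/2]).

%% Plain informational message.
call_connected(CallId, Millis) ->
    error_logger:info_msg("call ~p connected in ~p ms~n", [CallId, Millis]).

%% Structured error report; SASL and any attached handlers see it.
call_failed(CallId, Reason) ->
    error_logger:error_report([{subsystem, sip_gateway},
                               {call_id, CallId},
                               {reason, Reason}]).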
Alerting: The system can be probed for all you need (introspection is strong in Erlang). But you might have to code a bit to hook it up to the system of your choice.
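For example, a node can report a snapshot like the one below (the health module is hypothetical; the calls are standard BIFs), which you can poll and push into whatever alerting system you already run:

-module(health).
-export([snapshot/0]).

%% A few of the figures every node can report about itself.
snapshot() ->
    [{process_count, erlang:system_info(process_count)},
     {run_queue,     erlang:statistics(run_queue)},
     {memory_total,  erlang:memory(total)},
     {uptime_ms,     element(1, erlang:statistics(wall_clock))}].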
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
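A small sketch, assuming a config file of bare Erlang terms (the key names are invented): reading and checking it is just ordinary code.

%% prod.config might contain lines such as:
%%   {max_calls, 2000}.
%%   {sip_port, 5060}.
-module(config_check).
-export([missing_keys/1]).

%% Return the required keys that the config file fails to define.
missing_keys(Path) ->
    {ok, Terms} = file:consult(Path),
    [K || K <- [max_calls, sip_port], not lists:keymember(K, 1, Terms)].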
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
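For instance, profiling one hot call with fprof from the shell (routing:best_gateway/1 stands in for your own code):

%% Run the call under the profiler, then write the analysis to a file.
fprof:apply(routing, best_gateway, ["call-42"]),
fprof:profile(),
fprof:analyse([{dest, "fprof.analysis"}]).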
Windows interaction: Dunno. (Bias: last time I used windows professionally was in 1998 or so).
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process and it might resist. It is a new environment, so new approaches will be needed in places and you should expect to adapt and work around limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands and get a constant high from working with an interesting language, solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then better cut your teeth on some smaller subsystems first and then work towards the larger goal. Getting the full benefit of the system takes months if not years. Getting partial benefit can be had in weeks though.

migrate COBOL code

I have a task to convert COBOL code to .NET. Are there any converters available? I am trying to understand the COBOL code at a high level, but I have trouble understanding it. Are there any flowchart generators? I appreciate any help.
Thank you.
Migrating software systems from one language or operating environment to another is always a challenge. Here are a few things to consider:
Legacy code tends to be poorly structured as a result of a long history of quick fixes and problem work-arounds. This really hurts the signal-to-noise ratio when trying to wrap your head around what is really going on.
Converting code leads to further "de-structuring" to compensate for mismatches between the source and target implementation platforms. When you start from a poorly structured base (legacy system), the end result may be totally unintelligible.
Documentation of the legacy architecture and/or business processes is generally so far out of date that it is worse than useless; it may actually be misleading.
The complexity of COBOL code is almost always underestimated.
A number of "features" will be propagated into the converted system that were originally built to compensate for things that "couldn't be done" at one time (due to smaller memories, slower computers, etc.). Many of these may now be non-issues, and you really don't want them.
There are no obvious or straightforward ways to refactor legacy process-driven systems into an equivalent object-oriented system (at least not in a meaningful way).
There have been successful projects that migrated COBOL directly into Java. See naca. However, the end result is only something its mother (or another COBOL programmer) could love; see this discussion.
In general I would be suspicious of any product or tool claiming to convert your legacy COBOL system into anything but another version of COBOL (e.g. COBOL.net). Either way you still end up with what is essentially a COBOL system. If this approach is acceptable then you might want to review this white paper from Micro Focus.
IMHO, your best bet for replacing COBOL is to re-engineer your system. If you ever find a silver bullet to get from where you are to where you want to be - write a book, become a consultant and make many millions of dollars.
Sorry to have provided such a negative answer, but if you are working with anything but a trivial legacy system, the problem is going to be anything but trivial to solve.
Note: Don't bother with flowcharting the existing system. Try to get a handle on process input/output and program to program data transformation and flow. You need to understand the business function here, not a specific implementation of it.
Micro Focus and Fujitsu both have COBOL products that work with .NET. Micro Focus allow you to download a product trial, while the Fujitsu NetCOBOL site has a number of articles and case studies.
Micro Focus
http://www.microfocus.com/products/micro-focus-developer/micro-focus-cobol/windows-and-net/micro-focus-visual-cobol.aspx
Fujitsu
http://www.netcobol.com/products/Fujitsu-NetCOBOL-for-.NET/overview
[Note: I work for Micro Focus]
Actually, making COBOL applications available on the .NET framework is pretty straightforward (contrary to the claim made in one of the earlier responses). Fujitsu and Micro Focus both have COBOL compilers that can create ILASM code for execution in the CLR.
Micro Focus Visual COBOL (http://www.microfocus.com/visualcobol) makes it particularly easy to deploy traditional, procedural COBOL as managed code with full support for COBOL data types, file systems, etc. It also includes an updated OO COBOL syntax that strips away a lot of the verbosity and complexity, making it easy to write COBOL code based on C# examples. Its unique approach also makes it easy to use all the Visual Studio tools such as IntelliSense.
The original question mentioned "convert" and I would strongly recommend against any approach that requires the source code to be converted to some other language before being used in a .NET environment. The amount of effort and risk involved is highly unlikely to be worth any benefits accrued. On the contrary, keeping the code in COBOL maintains the existing, working code and allows for the option to deploy onto other platforms in the future. For example, how about having a single set of source code and having the option to deploy into .NET as a native language and into a Java environment without changing a line of source code?
I recommend you get a trial copy of Visual COBOL from the link above and see how you can use your existing code in .NET without making any changes.
This is not an easy task. COBOL has fundamental ideas about data types that do not map well with the object-oriented .NET framework (e.g. in COBOL, all data types are represented in terms of fixed-size buffers) and in particular the way groups and arrays work do not map well to .NET classes.
I believe there are COBOL compilers that can actually compile to .NET bytecode, but they would have their own runtime libraries to manage all of that. It might be worth looking at one of these compilers and simply leaving the legacy code in COBOL.
Other than that, line-by-line translation is probably not possible. Look at the code at a higher level and translate blocks of code at a time (e.g. at the procedure level or even higher).
There are a number of ways to convert COBOL to modern, scalable environments such as .NET or Java.
The first is migration to a new environment while keeping the existing COBOL code with some minor modifications (e.g. Micro Focus COBOL for .NET);
The second is migration to a new platform with simulation of COBOL statements and constructs, where additional .NET/Java libraries simulate specific pieces of COBOL logic: ACCEPT goes to NETLibrary.Accept, and so on.
The third approach is the most valuable one, when you migrate to "pure" NET/Java code with all the benefits of the new environment. It can be easily maintained and developed in the future.
However, unique expertise and toolkits are required for this approach, and there are only a few players on the global market that can help you in this case.
If we are talking about automatic migration, the number of players decreases greatly and, unfortunately for you, you have to pay for the specific technologies and tools (like ours).
However, it is a better idea to invest your money in your future growth in the modern environment, than to spend your money on the "simulation" of old technologies.
Translation is not an easy task. Besides Micro Focus and Fujitsu there is also Raincode, which offers a free version of COBOL that nicely integrates with Visual Studio.

Should I run my regression testing programs on both AMD and Intel chips?

Right now I plan to test on 32-bit, 64-bit, Windows XP Home, Windows XP Pro, Windows Vista Home Basic, Windows Vista Ultimate, Windows 7 Home Basic, and Windows 7 Ultimate ... all with the latest service pack.
However, now I'm wondering if it's worthwhile to test on both AMD and Intel for all the listed scenarios above or would it be a waste of time?
Note: this is a security application for everyday average users.
My feeling is that this would only be worthwhile if you had lots of on-the-edge hand-coded assembly language or some kind of incredibly tight timings (which you're not going to meet with that selection of OS anyway).
If you're using off-the-shelf commercial compilers, then you can be reasonably sure they're going to generate code which runs on all the normal processors.
Of course, nobody could ever prove they didn't need to test on a particular platform, but I would think there are bigger causes of platform difference to worry about than CPU brand (all the various multi-core/hyperthreading permutations, for example, which might expose all your multithreaded code bugs in different ways)
Only if you're programming in assembly and using extended, vendor-specific instruction sets. But since AMD and Intel have cross-licensing agreements in place, this is more of a historic issue than a current one.
In every other case (e.g. using a high level language) it's the job of the compiler writers to ensure the code is x86 compliant and runs on every CPU.
Oh, and apart from the FDIV bug, processor vendors usually don't make mistakes.
I think you're looking in the wrong direction for testing scenarios.
Yes, it's possible that your code will work on Intel but not on AMD, or in Windows Vista Home but not in Windows Vista Professional. But unless you're doing something very closely tied to low-level programming in the first case, or to details of OS implementation in the second, the odds are small. You could say that it never hurts to test every conceivable scenario. But in real life there must be some limit on the resources available to you for testing. Testing on different processors or different OS's is, in most cases, not testing YOUR program, it's testing the compiler, the OS, or the processor. How much time do you have to spare to test other people's work? I think your time would be better spent testing more scenarios within your own code. You don't give much detail on just what your app does, but just to take one of my own examples, it would be much more productive to spend a day testing selling products our own company makes versus products we resell from other manufacturers, or testing sales tax rules for different states, or whatever.
In practice, I rarely even test deploying on Windows versus deploying on Linux, never mind different versions of Windows, and I rarely get burned on that.
If I was writing low-level device drivers or some such, that would be a different story. But normal apps? Don't waste your time.
Certainly sounds like it would be a waste of time to me - which language(s) are your programs written in?
I'd say no. Unless you are writing your application in assembler, you should be far enough removed from the processor not to need to worry about differences. The processors will support the Windows OS, whose APIs are what you are interfacing with (depending on the language). If you are using .NET, the ONLY foreseeable issue you will have is if you are using a version of the framework that those platforms don't support. Given that they are all XP or later, you should be fine. If you want to worry about something, make sure your application plays nicely with the Vista-and-later security model.
The question is probably "what are you testing". It is unlikely that any of the tests is testing something that would differ between AMD and Intel hardware platforms. Differences could be expected at the driver level, but you do not seem to plan on testing your software on every bit of PC hardware out there. Most probably there would be far more differences between different Windows service-pack levels than between AMD and Intel processors.
I suppose it's possible there is some functionality in your code that (whether you know it or not) takes advantage of some processing/optimization in one or the other that could have a serious effect on the outcome. Keyword possible.
I would say in general you're unlikely to have to worry about it. If you're going to do it on multiple machines anyway, mix it up on them. But I wouldn't stress out about it.
I would never run all of my regression tests on both AMD and Intel unless I had specifically fixed an issue unique to either one. That is what regression testing is.
Unit testing on the other hand... I wouldn't anticipate any difference. So again, I wouldn't bother running unit tests on both until I had actually seen an issue specific to either AMD or Intel.
If you rely on accurate / consistent floating point results, then yes, definitely.

Tool for licensing and protect my Delphi Win32 apps [closed]

I am looking for a tool to protect and license my commercial software. Ideally it must provide an SDK compatible with Delphi 7-2010, support AES encryption and key generation, and be able to create trial editions of my application.
I am currently evaluating ICE License. Does anyone have experience with this software?
Here's my list of software protection solutions. I'm looking at switching from ASProtect to another protection so I'm also in the process of analyzing most of these programs:
Themida (Oreans)
http://www.oreans.com/products.php
There are unpacking tutorials for all the versions of Themida. There is however the possibility of requesting "custom" builds which might help avoid this.
Code Virtualizer (Oreans)
http://www.oreans.com/products.php
Allows you to protect specific parts of the application with a virtual machine. A cracker on a forum said he "made a CodeUnvirtualizer to fully convert Virtual Opcodes to Assembler Language".
EXECryptor
Very difficult to unpack. GUI does not work under Vista. Appears to no longer be developed.
ASProtect
Small protection overhead. Appears to no longer be developed.
TTProtect - $179 / $259
13 MB download. Chinese developer. Adds about xxx overhead to the exe.
http://www.ttprotect.com/en/index.htm
VMProtect - $159 / $319 (now $199/$399)
http://www.vmprotect.ru/
10 MB download. Russian developer. Seems to be updated frequently. Supports 32- and 64-bit. Uncrackable according to one exetools post, but there seems to be an unpacking tutorial already.
Enigma Protect - $149
http://enigmaprotector.com/en/home.html
7 MB download. Russian developer. Regarded as very difficult to crack. Adds about xxx overhead to the exe.
NoobyProtect - $289
http://www.safengine.com/
10.5 MB download. Chinese developer. Regarded as very difficult to crack. Adds about 1.5 MB overhead to the exe.
ZProtect - $179
http://www.peguard.com
RLPack
http://www.reversinglabs.com/products/RLPack.php
KeyGen already available.
One thing to note is that the more protection options you enable on the software protector, the greater the chance of the protected file being flagged by an anti-virus program as a false positive. For example, on Themida, checking the option to encrypt the file will most likely produce false positives in a few anti-virus programs.
I'll update this answer once I get more replies from a hackers forum where I asked some questions about these tools.
And finally, don't use the built-in serial number/license management of these tools. Although it might be more secure than using your own, you will be tied to that specific tool. If you decide to change software protection in the future, you will also have to manage transferring all the customer keys to a new system.
Don't bother. It's not worth the hassle. Only a perfect licensing system would actually do you any good, and there's no such thing. And in the age of the Internet, if your system isn't perfect, all it takes is for one person anywhere in the world to produce a crack and upload it somewhere, and anyone who wants a free copy of your program can get it. (And using a pre-existing library just gives them a head start on cracking it.)
If you want people to pay for your software instead of just downloading it, the one and only way to do so is to make your software good enough that people are willing to pay money for it. Anyone who tells you otherwise is lying.
I have used OnGuard (using the Delphi 2009/2010 source from SongBeamer) along with Lockbox to handle encryption with success. Both are commercial quality libraries and are free to use with full source.
I did once also use IceLicense, but switched to OnGuard/Lockbox which allowed me greater control over the key generation process which we embedded directly into our CRM system.
Of course there is no 100% bullet-proof protection suite, but having some type of protection is better than having nothing.
I worked with WinLicense in Delphi 2009 and Delphi 2010 on Windows XP and Vista. It is a good product with lots of protection options and customizations. It provides an SDK for developers and has nice documentation and samples. It also provides a license manager for you. They provide a trial download too.
As far as I remember, they offer some customer specific versions too; that means they are willing to provide a custom-built product which is customized according to your needs, but of course that will cost more.
Since WinLicense is a well-known and popular protection suite, many crackers are after it. As you know, the more famous a tool is, the more appealing it is to crackers. But the good thing about Oreans is that they actively monitor underground forums and provide frequent updates to their products.
So IMHO, if you are supposed to buy a prebuilt protection suite, then you'd better go for WinLicense.
A little late to the post, but check out Marx Software Security (http://www.cryptotech.com); they have a USB device with RSA & AES on chip, with network-based license management.
I bought a license for ICE License in 2007. Unfortunately (as far as I know) the component hasn't been updated since June 2007. Back then a Vista-compatible version was in the works, but it never came out of beta. I don't think they have updated the component for Delphi 2009 and 2010 yet.
Ionworx is a one-man company, which might explain the lack of updates and the lack of answers to support questions (I emailed them 2-3 times since 2007 and never got a reply). They also removed their support forum from their site.
ICE License is better than nothing, but I would stay away from this product because of the lack of updates and support.
I investigated this a few years ago, and came to the following conclusions:
All copy protection can be broken
Nag screens on load irritate people to the point where they may stop using the product
Random nag screens can interrupt the users work flow to the point where they perceive it to be a reduction in the speed of the application
Set up compiler options so that you have a demo version (perhaps with save functions removed), and reduce multi-user versions so that only one client can connect at a time (not by using, for example:
if connection=1 then reject
but by reducing the viability of multiple connections in the code)
Themida has good protection, and I think it is built with Delphi too ;-)
If you have a bigger budget, you can look at WinLicense and other tools from the same company.
Have a look at this question which is pretty similar, and includes many of the tools.
Take a look at InstallShield. We've been using it for a while ourselves, and it has a lot of capabilities for trial support, licensing, and others. I don't know about key generation off the top of my head as our use doesn't require keys, but there's a lot available to you from them.
AppProtect wraps an EXE or APP file with a computer-unique password or serial-number-based online activation. QuickLicense is a more comprehensive tool that supports all license types (trial, product, subscription, floating, etc.) and supports both a wrapping approach and an API to apply the license to any kind of software. Both are available from Excel Software at www.excelsoftware.com.

Datamining models in FORTRAN or C (or managed code)?

We are planning to develop a datamining package for windows. The program core / calculation engine will be developed in F# with GUI stuff / DB bindings etc done in C# and F#.
However, we have not yet decided on the model implementations. Since we need high performance, we probably can't use managed code here (any objections here?). The question is, is it reasonable to develop the models in FORTRAN or should we stick to C (or maybe C++). We are looking into using OpenCL at some point for suitable models - it feels funny having to go from managed code -> FORTRAN -> C -> OpenCL invocation for these situations.
Any recommendations?
F# compiles to the CLR, which has a just-in-time compiler. It's a dialect of ML, which is strongly typed, allowing all of the nice optimisations that go with that type of architecture; this means you will probably get reasonable performance from F#. For comparison, you could also try porting your code to OCaml (IIRC this compiles to native code) and see if that makes a material difference.
If it really is too slow then see how far scaling the hardware will get you. With the performance available from a modern PC or server it seems unlikely that you would need to go to anything exotic unless you are working with truly brobdingnagian data sets. Users with smaller data sets may well be OK on an ordinary PC.
Workstations give you perhaps an order of magnitude more capacity than a standard desktop PC. A high-end workstation like an HP Z800 or XW9400 (similar kit is available from several other manufacturers) can take two 4- or 6-core CPU chips, tens of gigabytes of RAM (up to 192GB in some cases) and has various options for high-speed I/O like SAS disks, external disk arrays or SSDs. This type of hardware is expensive but may be cheaper than a large body of programmer time. Your existing desktop support infrastructure should be able to handle this sort of kit. The most likely problem is compatibility issues running 32-bit software on a 64-bit O/S, in which case you have various options like VMs or KVM switches to work around them.
The next step up is a 4- or 8-socket server. Fairly ordinary Wintel servers go up to 8 sockets (32-48 cores) and perhaps 512GB of RAM - without having to move off the Wintel platform. This gives you a fairly wide range of options within your platform of choice before you have to go to anything exotic.
Finally, if you can't make it run quickly in F#, validate the F# prototype and build a C implementation using the F# prototype as a control. If that's still not fast enough you've got problems.
If your application can be structured in a way that suits a more exotic platform then you could look at one. Depending on what will work with your application, you might be able to host it on a cluster or a cloud provider, or build the core engine on a GPU, Cell processor or FPGA. However, in doing this you're taking on (quite substantial) additional costs and exotic dependencies that might cause support issues. You will probably also have to bring in a third-party consultant who knows how to program the platform.
After all that, the best advice is: suck it and see. If you're comfortable with F# you should be able to prototype your application fairly quickly. See how fast it runs and don't worry too much about performance until you have some clear indication that it really will be an issue. Remember, Knuth said that premature optimisation is the root of all evil about 97% of the time. Keep a weather eye out for issues and re-evaluate your strategy if you think performance really will cause trouble.
Edit: If you want to make a packaged application then you will probably be more performance-sensitive than otherwise. In this case performance will probably become an issue sooner than it would with a bespoke system. However, this doesn't affect the basic 'suck it and see' principle.
For example, at the risk of starting a game of buzzword bingo, if your application can be parallelized and made to work on a shared-nothing architecture you might see if one of the cloud server providers [ducks] could be induced to host it. An appropriate front-end could be built to run locally or through a browser. However, on this type of architecture the internet connection to the data source becomes a bottleneck. If you have large data sets then uploading these to the service provider becomes a problem. It may be quicker to process a large dataset locally than to upload it through an internet connection.
I would advise not to bother with optimizations yet. First try to get a working prototype, then find out where computation time is spent. You can probably move the biggest bottlenecks out into C or Fortran when and if needed -- then see how much difference it makes.
As they say, often 90% of the computation is spent in 10% of the code.
