Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I'm writing a new project, and I have a choice between using a library that only exists in OSX 10.5 and later (we're on 10.6 now) but makes my life much easier, and using a library from earlier versions, which means doing a lot more of the work myself.
How does one make this decision? How do you balance new/better technology vs customers on old systems?
ETA: does anyone know of a site that compares market share by percentage for specific OS versions? Since this is a consumer product, if only 2% of Mac users are still on 10.4, that sort of makes my life easy. Similarly, if 25% are still on 10.4... (I know, it's almost guaranteed to be somewhere in between...)
Ask your clients - how many are on older versions of the OS?
Can you afford to lose them?
Edit: (following comment)
If you don't know what your target audience is using, you have a problem. You need to get an idea of the magnitude of how many potential customers you will not be able to serve if you go with your new library.
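To put a number on that magnitude, a quick back-of-the-envelope calculation is enough. The figures below are purely illustrative assumptions, not real market-share data:

```python
# Illustrative assumptions only: plug in your real market-share and pricing figures.
total_prospective_customers = 10_000   # assumed addressable Mac users
share_still_on_10_4 = 0.25             # assumed fraction still on OSX 10.4
price_per_license = 29.0               # assumed unit price

# Customers you cannot serve if you require 10.5+, and the revenue at stake.
lost_customers = total_prospective_customers * share_still_on_10_4
lost_revenue = lost_customers * price_per_license

print(f"Potentially unserved customers: {lost_customers:.0f}")
print(f"Revenue at risk: ${lost_revenue:,.2f}")
```

Even a rough estimate like this turns "can we afford to lose them?" into a concrete comparison against the development time the newer library saves.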
Having said that, shipping is a feature, so if you get the product out much quicker, you can always refactor the code to use the old libraries if you think it will gain many sales.
In general you should base your decisions like that around the interests of your paying customers. You should present the issues to them and the risks involved in each alternative and let them make the decision.
Depending upon your particular application and requirements, I would personally ship this as a major update (i.e. version 2 compared to version 1) and explicitly state that a minimum of OSX 10.5 is required.
You could still support your previous version with bug fixes, just not new features that depend on library X.
Another way to think about it is that if someone is on 10.4, then they likely haven't been an active upgrader / software purchaser for the last 3 years. So the likelihood that they will want to spend money on your software is low.
Additionally, if they really want your software, they'll upgrade to 10.5 or 10.6 and gain loads of other advantages at the same time. While that OS upgrade won't be free, it will come with so many other advantages to the customer, they might not mind.
It's also important to consider how much time and effort it will take to develop your software. If these newer libraries mean that you ship the product months earlier, or with better features, that will also pay off.
As others have said, this really boils down to whether you can afford to lose customers who aren't on 10.5 yet. That said, lots of companies seem to support the two most recent versions of OS X in their new major releases, although older versions are often available for people with older systems.
If software ownership is stable and the vendor is not pushing too hard to phase out its own obsolete software, then there is no reason not to support older versions.
The problem is much worse when the vendor is passive-aggressive about, or committed to, the phase-out: dead download links; dead third-party companies that made the hardware, drivers, compilers, and libraries; unobtainable documentation; and incompatible media or installers for recovering and reinstalling the product.
My example: pre-2000 versus 2005. It is nearly impossible to reconstruct, say, the build process of a million lines of fully saved and mothballed Visual Studio 6.0 projects from 1999-2001: obtain all the third-party libraries from the era, prepare the proper SDK, the platform itself, and all the patches, and make the results binary-identical. No way.
But it pretty much works for Visual Studio 2005.
You need to talk to both sales and support, and let them judge what the impact will be.
Closed. This question is opinion-based. Closed 9 years ago.
We currently have a large business-critical application written in COBOL, running on OpenVMS (Integrity/Itanium).
As the months pass, there is more and more speculation about the lifetime of the Itanium architecture. Nothing is said out in the open, of course, but articles like this and this paint a worrying picture. Although I can find nothing official to support this, there are even murmurings in the corridors of our company of HP ditching OpenVMS and HP COBOL along with it.
I cannot believe that we are alone in this.
The way I see it, there are a few options:
Emulate some old hardware and run the application on that using a product like CHARON-VAX or CHARON-AXP. The way I see it, the pros are that the process should be relatively painless, especially if the 64-bit (AXP) option is used. Potential cons are a degradation in performance (although this should be offset by faster and faster hardware);
Port the HP COBOL-based application to a more modern dialect of COBOL, such as Visual COBOL. The pros, then, are the fact that the porting effort is relatively low (it's still COBOL) and the fact that one can run the application on a Unix or Windows platform. The cons are that although you're porting COBOL, the fact that you're porting to a different operating system could make things tricky (esp. if there are OpenVMS-specific dependencies);
Automatically translate the COBOL into a more modern language like Java. This has the obvious benefit of immediately freeing one from all the legacy issues in one fell swoop: hardware support, operating system support, and especially finding administrators and programmers. Apart from this being a big job, an obvious downside is the fact that one will end up with non-idiomatic Java (or whatever target language is ultimately chosen); arguably, this is something that can be ameliorated over time.
A rewrite, from the ground up (naturally, using modern technologies). Anyone who has done this knows how expensive and time-consuming it is. I've only included it to make the list complete :)
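To make option 3's downside concrete: mechanically translated code tends to preserve the source language's control flow rather than the target's idiom. A purely illustrative sketch, using Python as a stand-in target language and a hypothetical record layout:

```python
# Illustrative only: what a line-by-line mechanical translation of a COBOL
# PERFORM VARYING loop might look like, versus the idiomatic equivalent.

def total_mechanical(records):
    # Mirrors COBOL control flow: explicit 1-based index, explicit accumulator.
    ws_total = 0
    ws_idx = 1
    while ws_idx <= len(records):
        ws_total = ws_total + records[ws_idx - 1]["amount"]
        ws_idx = ws_idx + 1
    return ws_total

def total_idiomatic(records):
    # The same computation, restated in the target language's idiom.
    return sum(r["amount"] for r in records)

records = [{"amount": 10}, {"amount": 5}]
print(total_mechanical(records), total_idiomatic(records))  # 15 15
```

Both functions are correct, but only the second reads like code a maintainer hired for the target language would expect; closing that gap is the gradual refactoring effort mentioned above.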
Note that there is no dependency on a proprietary DBMS; the database is ISAM file-based.
So ... my question is:
What are others faced with the imminent obsolescence of Itanium doing to maintain business continuity when their platform of choice is OpenVMS and COBOL?
UPDATE:
We have had an official assurance from our local HP representative that Integrity/Itanium/OpenVMS will be supported at least up until 2022. I guess this means that this whole issue is less about the platform, and more about the language (COBOL).
The main problem with this effort will be the portions of the code that are OpenVMS-specific. Most applications developed on OpenVMS use routines and procedures that are not easily ported to another platform. Rather than worry about specific language compatibility, I would initially focus on the runtime routines and command procedures used by the application.
An alternative approach may be to continue using the current application while developing a new one, or modifying a commercially available application to suit your needs. While the long-term status of Itanium is in question, history indicates that OpenVMS will remain viable for some time to come. There are still VAX machines being used today for business-critical applications. The fact that OpenVMS and its hardware are stable is the main reason for their longevity.
Dan
It looks like COBOL is the main dependency that keeps you worried. I understand Itanium+OpenVMS in this picture is just a platform.
You're definitely not alone in running mission-critical stuff on OpenVMS. HP's site has an OpenVMS roadmap (both Alpha and Integrity); support currently stretches to 2015. Oracle also seems to be trying to leverage its Sun assets in different domains recently.
In any case, if your worries are substantial (sure, we all worried about the Compaq and then HP acquisitions, and the VAX >> Alpha >> Itanium transitions in the past), there is time to untie the COBOL dependency.
So I would look now into charting a migration path from COBOL to a more portable language of choice (e.g. ANSI C/C++ without platform extensions). Perhaps Java isn't the friendliest choice, given Oracle's recent activity. A rewrite, however unpleasant, will be more progressive and will likely streamline the whole process. The sooner one starts, the sooner one completes.
Also, in addition to emulators, there is still plenty of second-hand hardware. Ironically, one company I know is just now phasing in Integrity platforms to supplant mission-critical Alphas; I guess it's "corporate testing requirements"...
Doing nothing is an option as well, though obviously riskier: OpenVMS platforms have proven dependable, so alternatively, finding a reliable third-party support company may extend your future hardware contingency.
This summer's Rolling Roadmap makes porting off OpenVMS look like an excellent idea.
Given how much COBOL exists in the world finding people to support COBOL will not be a problem for the foreseeable future. As noted above there are COBOL compilers on other platforms. The problem lies in the OpenVMS system service calls and DEC language extensions your application uses. You do not mention where your data is stored, so worst case your COBOL uses RMS. There is a company that provides an implementation of many OpenVMS system services on Linux and the Unixes. Not needing to replace those services while porting to another operating system may reduce the complexity. Check out Sector7.com.
Closed. Questions asking us to recommend or find a tool or library are off-topic for Stack Overflow. Closed 9 years ago.
I am looking for a tool to protect and license my commercial software. Ideally it should provide an SDK compatible with Delphi 7-2010, support AES encryption, include a key generator, and be able to create trial editions of my application.
I am currently evaluating ICE License. Does anyone have experience with this software?
Here's my list of software protection solutions. I'm looking at switching from ASProtect to another protection so I'm also in the process of analyzing most of these programs:
Themida (Oreans)
http://www.oreans.com/products.php
There are unpacking tutorials for all the versions of Themida. There is however the possibility of requesting "custom" builds which might help avoid this.
Code Virtualizer (Oreans)
http://www.oreans.com/products.php
Allows you to protect specific parts of the application with a virtual machine. A cracker on a forum said he "made a CodeUnvirtualizer to fully convert Virtual Opcodes to Assembler Language".
EXECryptor
Very difficult to unpack. GUI does not work under Vista. Appears to no longer be developed.
ASProtect
Small protection overhead. Appears to no longer be developed.
TTProtect - $179 / $259
13 MB download. Chinese developer. Adds about xxx overhead to the exe.
http://www.ttprotect.com/en/index.htm
VMProtect - $159 / $319 (now $199/$399)
http://www.vmprotect.ru/
10 MB download. Russian developer. Seems to be updated frequently. Supports 32- and 64-bit. Uncrackable according to one exetools post, but there already seems to be an unpacking tutorial.
Enigma Protect - $149
http://enigmaprotector.com/en/home.html
7 MB download. Russian developer. Regarded as very difficult to crack. Adds about xxx overhead to the exe.
NoobyProtect - $289
http://www.safengine.com/
10.5 MB download. Chinese developer. Regarded as very difficult to crack. Adds about 1.5 MB overhead to the exe.
ZProtect - $179
http://www.peguard.com
RLPack
http://www.reversinglabs.com/products/RLPack.php
KeyGen already available.
One thing to note is that the more protection options you enable on the software protector, the bigger the possibility of the protected file being flagged by an anti-virus as a false-positive. For example, on Themida, checking the option to encrypt the file, will most likely create a few false-positives by a few anti-virus programs.
I'll update this answer once I get more replies from a hackers forum where I asked some questions about these tools.
And finally, don't use the built-in serial number/license management of these tools. Although it might be more secure than rolling your own, you will be tied to that specific tool. If you decide to change software protection in the future, you will also have to manage the transfer of all customer keys to the new system.
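If you do manage keys yourself, the scheme can stay simple. Here is a minimal sketch of issuing and verifying self-signed license keys with an HMAC; the secret, payload layout, and function names are illustrative assumptions, not what any of the tools above actually do:

```python
import base64
import hashlib
import hmac

SECRET = b"replace-with-your-own-secret"  # assumption: kept out of the shipped binary

def make_license(customer: str, edition: str) -> str:
    """Issue a key: 'customer|edition' plus an HMAC-SHA256 tag, base64-encoded."""
    payload = f"{customer}|{edition}"
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}|{tag}".encode()).decode()

def check_license(key: str) -> bool:
    """Verify a key offline: recompute the tag and compare in constant time."""
    try:
        decoded = base64.urlsafe_b64decode(key.encode()).decode()
    except Exception:
        return False
    payload, _, tag = decoded.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

key = make_license("Alice Example", "pro")
print(check_license(key))                 # a genuine key verifies
print(check_license(key[:-4] + "AAAA"))   # a tampered key does not
```

Because the keys are yours, moving to a different protector later only means re-wrapping the binary, not reissuing every customer's license.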
Don't bother. It's not worth the hassle. Only a perfect licensing system would actually do you any good, and there's no such thing. And in the age of the Internet, if your system isn't perfect, all it takes is for one person anywhere in the world to produce a crack and upload it somewhere, and anyone who wants a free copy of your program can get it. (And using a pre-existing library just gives them a head start on cracking it.)
If you want people to pay for your software instead of just downloading it, the one and only way to do so is to make your software good enough that people are willing to pay money for it. Anyone who tells you otherwise is lying.
I have used OnGuard (using the Delphi 2009/2010 source from SongBeamer) along with Lockbox to handle encryption with success. Both are commercial quality libraries and are free to use with full source.
I did once also use IceLicense, but switched to OnGuard/Lockbox which allowed me greater control over the key generation process which we embedded directly into our CRM system.
Of course there is no 100% bullet-proof protection suite, but having some type of protection is better than having nothing.
I worked with WinLicense in Delphi 2009 and Delphi 2010 on Windows XP and Vista. It is a good product with lots of protection options and customizations. It provides an SDK for developers, and has nice documentation and samples. It also provides a license manager for you. They offer a trial download too.
As far as I remember, they offer some customer specific versions too; that means they are willing to provide a custom-built product which is customized according to your needs, but of course that will cost more.
Since WinLicense is a well-known and popular protection suite, many crackers are after it. As you know, the more famous a tool is, the more appealing it is to crackers. But the good thing about Oreans is that they actively monitor underground forums and provide frequent updates to their products.
So IMHO, if you are supposed to buy a prebuilt protection suite, then you'd better go for WinLicense.
A little late to the post, but check out Marx Software Security (http://www.cryptotech.com) they have a USB device with RSA & AES on chip, with network based license management.
I bought a license for ICE License in 2007. Unfortunately, as far as I know, the component hasn't been updated since June 2007. Back then a Vista-compatible version was in the works but never came out of beta, and I don't think they have updated the component for Delphi 2009 and 2010 yet.
Ionworx is a one-man company, which might explain the lack of updates and the lack of answers to support questions (I've emailed them 2-3 times since 2007 and never heard back). They also removed the support forum from their site.
ICE License is better than nothing, but I would stay away from this product because of the lack of updates and support.
I investigated this a few years ago, and came to the following conclusions:
All copy protection can be broken
Nag screens on load irritate people to the point where they may stop using the product
Random nag screens can interrupt the users work flow to the point where they perceive it to be a reduction in the speed of the application
Set up compiler options so that you can build a demo version (perhaps with save functions removed), and limit multi-user versions so that only one client can connect at a time. Do this not with an explicit check such as:
if connection = 1 then reject
but by reducing the viability of multiple connections in the code itself.
Themida has good protection, and I think it was built with Delphi too ;-)
If you have a bigger budget, you can look at WinLicense and other tools from the same company.
Have a look at this question which is pretty similar, and includes many of the tools.
Take a look at InstallShield. We've been using it for a while ourselves, and it has a lot of capabilities for trial support, licensing, and others. I don't know about key generation off the top of my head as our use doesn't require keys, but there's a lot available to you from them.
AppProtect wraps an EXE or APP file with a computer-unique password or serial-number-based online activation. QuickLicense is a more comprehensive tool that supports all license types (trial, product, subscription, floating, etc.) and supports both a wrapping approach and an API to apply the license to any kind of software. Both are available from Excel Software at www.excelsoftware.com.
Closed. This question is opinion-based. Closed 9 years ago.
I recently told a friend that I was starting to learn Catalyst (Perl) and he fairly strongly emphasized that because Catalyst has so freakin' many dependencies, I should use something like Rails instead.
Isn't that a good thing that there are a lot of dependencies? Doesn't that indicate a lot of code re-use? I understand that there might be more effort involved with installing the framework but are there any other disadvantages?
I will resume my Catalyst tutorial until I get some juicy responses. :-)
There is nothing particularly wrong with this. The advantage of Catalyst is that its pieces can be used by people not using all of Catalyst. This means that there are more eyes looking at, and fixing bugs in, the critical parts.
The biggest complaint I hear is that it's annoying to watch all those messages go by in the CPAN shell as Catalyst is installing. The solution is to take advantage of your OS's package manager as you are getting started. On Debian, apt-get install libcatalyst-perl takes 15 seconds on a machine with no other Perl modules installed. 15 seconds. (A plain CPAN install is not difficult either, but the standard CPAN shell asks you a lot of dumb questions, and that puts off newbies.)
Don't worry about the dependencies, there are good tools for managing them, and they make the framework stronger and more flexible.
This is a subject I've seen postings about before. I've been meaning to write an article about it and have finally done so.
It is here: The Lie of Independence
I encourage you to read it. The gist is simple, though. The question is wrong. It's not 'Do you use an application or framework with lots of dependencies, or one that doesn't have them?'
It is: 'Do you use an application or framework that has lots of external dependencies, or one that tries to do it all internally?'
And the question that follows is 'Do you really have faith that the person or people writing this framework understand every nuance of every tiny detail of every task that needs to be done to process a web request?'
When there are version dependencies between components, you can find yourself backed into a non-working situation if you're forced to upgrade one component (say, for security reasons) before a compatible version of a dependent component is available.
That assumes you can get into a working state in the first place. It may be that if you try to use the current versions of all dependencies, you'll find that they don't play along.
The larger the number of dependencies, the greater the risk.
Rails isn't free of this problem, either. With every new Ruby release, there's a scramble to update instructions for how to get, say, database drivers built.
To be fair, this problem has trended towards "better" over time, albeit with bumps in the road.
My personal experience is that the more dependencies you have, the more versioning you have to keep track of, and this can lead to nightmarish situations. In particular, updating one dependency (due to a bug you want fixed, for example) can bring you compatibility issues with the other dependencies. As an example, I personally had a situation where gcc 4.0.3 worked with foo but not with bar (dependency of foo), and gcc 4.0.5 worked with bar but not with foo. Fortunately, 4.0.2 worked.
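The foo/bar situation above boils down to intersecting version constraints: each package accepts a range of versions of the shared dependency, and a workable version must lie in every range. A small illustrative sketch (the package names and version ranges are hypothetical, modeled on the gcc example):

```python
def intersect(ranges):
    """Return the (low, high) intersection of inclusive version ranges,
    or None if the constraints are unsatisfiable."""
    low = max(r[0] for r in ranges)
    high = min(r[1] for r in ranges)
    return (low, high) if low <= high else None

# Hypothetical constraints on a shared toolchain version, as inclusive
# (min, max) tuples of (major, minor, patch):
constraints = {
    "foo": ((4, 0, 2), (4, 0, 3)),   # foo works with gcc 4.0.2 through 4.0.3
    "bar": ((4, 0, 2), (4, 0, 5)),   # bar works with gcc 4.0.2 through 4.0.5
}

ok = intersect(list(constraints.values()))
print(ok)  # the surviving range; any version inside it satisfies both packages
```

When an upgrade narrows one package's range, the intersection can vanish entirely, which is exactly the "nightmarish situation" described above; the more ranges you intersect, the more likely it is to happen.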
Also, a large number of dependencies tends to point at "Frankenstein's monster" products, made of parts that were not designed up front to play together. A well-integrated framework is designed to play nicely and consistently. This, of course, can be fixed by properly wrapping the differences.
Closed. This question is off-topic. Closed 13 years ago.
I have a discussion point about Codegear's licensing.
Delphi 2009 is sold (more correctly: licensed) under two different types of license:
Commercial license
Academic license
The Commercial license (full and upgrade) is much more expensive than the academic one!
The commercial license has the drawback of the higher price, but its advantage is that commercial applications can be made.
The academic license has the advantage of the low price, but there is a catch: you have to prove that you are a scholar, student, or teacher, or else you won't obtain your license! The non-commercial nature of this license is a non-issue!
I'd like to see a third one:
-- Non-Commercial license
This license should be priced as low as the academic one, or somewhat higher, and could not be used commercially. It should be tied to the person who purchased it, like the existing license types.
This license would have several advantages:
Hobbyists would have access to Delphi, C++ Builder, and other CodeGear software.
Illegal usage might decrease due to the more affordable pricing.
It would be an ideal license for creating and maintaining open-source software with the latest Delphi.
What do you think about this matter?
Turbo Delphi is free to use.
What do you think about this matter?
1) Open source is commercial. You can sell open-source software.
2) Hobbyists can sell software too... and I can't see why hobbyists are willing to spend lots of dollars on a camera, a guitar, a bike, whatever you like, but can't spend $450 on an IDE. Just because you can't copy a camera or a bike?
3) Most people would buy the "non-commercial" version and develop commercial software anyway. How could CodeGear track it? Tracking costs money, and can offset any earnings.
4) Illegal usage won't decrease, per point 3: people using illegal software don't like to pay even $30.
There was previously a Personal license that fit that niche, and the Turbo and Turbo Explorer versions fit it as well. The issue is that there are three groups:
Those who buy based on features first, price second (enterprise, etc.)
Those who buy on price, not features: it needs to be cheap, preferably free (hobbyists, etc.)
Those who will only take it for free, with no qualms about licensing (pirates)
The 3rd group is characterized by the fact that they pirate the Architect version when they are only using features of the Professional version (or a free version, if one existed). They will never convert to paying customers, although they may convert to a free version if it has all the features they want (though that is unlikely).
The issue with trying to maximize the 2nd group (turning all of them into customers) is that you don't want to move people out of the 1st group. If someone is buying based on features, and you offer a lower-featured version for less money, they may be happy with that version and just buy it instead. Why not save money?
Non-commercial is too nebulous a license, as has been pointed out by others. If you cripple the features too much, then making the offer is a wasted effort, since no one will want it, and it will reflect poorly on the professional version. The only thing that would work is a nag screen, but that would be really annoying too, and by the very nature of the users it would be easily bypassed.
So the bottom line is that the money that keeps a company afloat comes from maximizing the 1st group. Attempting to appease the 2nd and 3rd groups can actually result in lost money. Although I agree that if they want to target more hobbyists, then they really need a free / low-cost offering (an updated Turbo and Turbo Explorer).
Closed. This question needs to be more focused. Closed 6 years ago.
I'm looking for a consensus for one or the other:
Beta: release software that is semi-functional to a limited audience and let users guide the process until the application is complete.
No beta: define all functionality based on previous user feedback and/or business logic, build the software, and test it with whatever time you give yourself (or are given).
Hmmm... a beta is for a product you think is finished, until you give it to the beta users to play with and they break it in ways you haven't thought of.
So, yes, go with a beta, unless you want to burn your product on many users.
I disagree with both of your cases.
How about internal QA? They find bugs too, you know. Only release a beta to customers when you have no known serious bugs.
Also, always test adequately. If time is pressing and you haven't finished all your tests, then that's just tough. Finish testing, whether you have time or not.
My answer is that there are many factors that would determine which makes more sense:
1) Consequences of bugs - If bugs would result in people dying, then I think a beta would be a bad idea. Could you imagine running beta software on a nuclear reactor or on a missile system? On the other hand, if the consequence is fairly minor like if there is a temporary outage on some fantasy sports site, that may not be so bad to put out in beta.
2) Expectations of users. Think about the users of the application and how would they feel about using something that is a "beta"? If this would make them scared to actually use the software and be afraid that it is going to blow up on them regularly and be riddled with bugs, that may also play a role.
3) Size of the application. If you are going to build something very large like say an ERP to handle the legal requirements of 101 countries and contain hundreds of add on modules, then a beta may be more sound than trying to get it all done and never get to where you have customers.
4) Deployment. If you are setting up something where the code is run on your own machines and can easily be upgraded and patched, then a beta may be better than trying to get it all done right in the beginning.
5) Development methodology. If you take a waterfall approach, then no beta is likely a better option, while in an agile scenario a beta makes much more sense. The reason for the latter is that in the agile case there will be multiple releases that will improve the product over time.
Just a few things I'd keep in mind as there are some cases where I easily imagine using betas and in other cases I'd avoid betas as much as possible.
Undoubtedly beta!
Some benefits of running a beta period ...
Improve product quality, usability, and performance
Uncover bugs
Gain insight into how customers use your product
Gauge response to new features
Collect new feature requests
Estimate product support requirements
Identify customers
Earn customer testimonials
Prepare for final release
Generate buzz for product release
Launch quickly and iterate often.
Without knowing what the app is and what it's audience will be, I think that would be a hard choice to make. However if it's an Open Source project, it seems like the consensus is usually "Release Early and Release Often".
The real question is this: do you know exactly what your users want?
If you answer yes, then design and build everything and launch it with no Beta.
If you answer no, then Beta is the way to go - your users will help define your software and they will feel more a part of the process and have some ownership.
I say Beta. But I disagree with your premise. You should never release semi-anything software. The software released as Beta should at least be feature complete. This way your users know what they're getting into.
As far as finding bugs goes, internal testing is best. However, anyone who has released software knows that no matter what, users will find new and interesting ways to break it.
So beta 'til your heart's content.
I would suggest alpha testing first, to a select group, that is not feature complete, so they can help you find bugs and determine which features are needed.
Once you get what is thought to be feature complete, then release it to a larger group, mainly to find bugs, and to get comments on feature changes, but you may not make feature changes unless there is something critical.
At this point you are ready for release, and you go back to step (1) for the next release.
After finishing your software (you think), and believing that there are no serious bugs, conduct alpha testing: make the software available to the test staff within your company, and fix the bugs they report.
Next, release the software to customers as beta testing, collect comments, fix bugs & improve features.
Only then you're ready for release.
Neither.
You shouldn't release a beta until you feel that the product is bug-free. And at that point, the beta users will find bugs. They will, trust me. As far as letting the beta users 'guide the design', probably not a good idea - you'll end up with software that'll look like the car that Homer Simpson designed. There's a reason why beta users aren't software designers.
As far as the second option - unless you are an extraordinary tester, you will not be able to test it as well as end-users (they will do the silliest things to your software). And if you "test it with whatever time you give yourself (or are given)", you will not have enough time.