Why does FogBugz require that DEP be turned off? [closed]

I am really wondering why FogBugz, when installed locally, insists that DEP be turned off.

FogBugz 6 (and earlier) requires that Data Execution Prevention (DEP) be disabled on versions of Windows that have DEP, because of a third-party COM component that we use for parsing email. We will fix this in the next major release of FogBugz: FogBugz will no longer use this third-party component (in fact, the next version of FogBugz will not use any COM components).

Turn it on and see where it crashes with a debugger :) I ran across some COM components that would execute code from a data block, which triggered a DEP exception. I would be willing to guess FogBugz is also accessing some native components somewhere that are doing the same.
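Not knowing the FogBugz internals, here is a minimal sketch (plain C, x86, purely illustrative) of the general pattern that raises a DEP exception: machine code copied into an ordinary data buffer and then called.

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* x86 machine code for: mov eax, 42; ret */
    static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* A heap allocation is readable/writable but, with DEP on, not executable. */
    unsigned char *buf = malloc(sizeof code);
    if (!buf) return 1;
    memcpy(buf, code, sizeof code);

    /* With DEP enabled this call faults (access violation);
       with DEP disabled it quietly returns 42. */
    int (*fn)(void) = (int (*)(void))buf;
    return fn();
}
```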

Code that attempts to patch or insert hooks into other modules within its address space often won't work unless DEP is disabled, or the appropriate memory-protection options are set for the installed hook.
This is a common technique with some frameworks (e.g. Delphi) where 'patches' are applied dynamically at run-time to fix bugs that the vendor has not yet addressed.
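As a hedged illustration (not FogBugz's or any particular framework's actual code; install_hook and write_jmp are made-up names, and real patchers must disassemble to find whole-instruction boundaries), this is roughly what a well-behaved run-time patcher has to do under DEP. Patchers written before DEP often skipped the protection steps and only kept working with DEP off.

```c
#include <windows.h>
#include <string.h>

/* Build a 5-byte relative JMP at `at`, pointing to `to`. */
static void write_jmp(unsigned char *at, void *to)
{
    INT32 rel = (INT32)((char *)to - (char *)(at + 5));
    at[0] = 0xE9;                       /* JMP rel32 */
    memcpy(at + 1, &rel, sizeof rel);
}

/* Redirect `target` to `hook`, saving the displaced bytes in a
   trampoline so the original can still be called. */
static void *install_hook(void *target, void *hook)
{
    DWORD old;

    /* The trampoline lives in a data buffer; under DEP it must be
       allocated (or re-protected) as executable, or calling it faults.
       This is the "appropriate memory protection" mentioned above. */
    unsigned char *tramp = VirtualAlloc(NULL, 16, MEM_COMMIT | MEM_RESERVE,
                                        PAGE_EXECUTE_READWRITE);
    if (!tramp) return NULL;
    memcpy(tramp, target, 5);           /* simplification: assumes the first
                                           5 bytes are whole instructions */
    write_jmp(tramp + 5, (char *)target + 5);

    /* The target's code page is read/execute; make it writable to patch. */
    if (!VirtualProtect(target, 5, PAGE_EXECUTE_READWRITE, &old))
        return NULL;
    write_jmp(target, hook);
    VirtualProtect(target, 5, old, &old);
    FlushInstructionCache(GetCurrentProcess(), target, 5);
    return tramp;                       /* call this to reach the original */
}
```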

I just don't like the idea of having DEP turned off in a server environment because a modern, state-of-the-art piece of software can't handle it. Especially since it is the only software I have tried over the years that has required it.
It was during the installation that I came across the DEP alert.
As the FogBugz link above says, "Be warned, however, that FogBugz will not function properly with DEP enabled."

Not knowing the specifics of FogBugz, but...
The most common reason for turning off DEP is programmatically generated thunks on the stack or the heap. The Windows kernel emulates the most common ones, but "most common" doesn't amount to very good coverage.
The second most common reason for turning off DEP is incorrectly linked code segments that appear to be data segments.
The third most common reason is machine code in strings. In general, this is really bad style but sometimes on Windows it cannot be helped.
The fourth most common reason is some algorithm in the code assumed the stack layout. DEP messes with that.
Or perhaps the program really is running code on a heap buffer.
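For the thunk and heap-code cases, the DEP-friendly fix is to put generated code in memory that is explicitly marked executable rather than in a plain stack or heap buffer. A minimal sketch (illustrative C; make_thunk is a made-up helper, not anyone's actual API):

```c
#include <windows.h>
#include <string.h>

typedef int (*thunk_fn)(void);

/* Copy `len` bytes of machine code into fresh memory and make it
   executable before handing back a callable pointer. Writing the same
   bytes into a stack array and calling them is precisely the
   "generated thunk" case that DEP kills. */
static thunk_fn make_thunk(const unsigned char *code, SIZE_T len)
{
    DWORD old;
    void *mem = VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);
    if (!mem) return NULL;
    memcpy(mem, code, len);

    /* Flip to read+execute once written (W^X style). */
    if (!VirtualProtect(mem, len, PAGE_EXECUTE_READ, &old)) {
        VirtualFree(mem, 0, MEM_RELEASE);
        return NULL;
    }
    FlushInstructionCache(GetCurrentProcess(), mem, len);
    return (thunk_fn)mem;
}
```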

Moving away from Itanium [closed]

We currently have a large business-critical application written in COBOL, running on OpenVMS (Integrity/Itanium).
As the months pass, there is more and more speculation about the lifetime of the Itanium architecture. Nothing is said out in the open, of course, but articles like this and this paint a worrying picture. Although I can find nothing official to support this, there are even murmurings in the corridors of our company of HP ditching OpenVMS and HP COBOL along with it.
I cannot believe that we are alone in this.
The way I see it, there are a few options:
Emulate some old hardware and run the application on that using a product like CHARON-VAX or CHARON-AXP. The way I see it, the pros are that the process should be relatively painless, especially if the 64-bit (AXP) option is used. Potential cons are a degradation in performance (although this should be offset by faster and faster hardware);
Port the HP COBOL-based application to a more modern dialect of COBOL, such as Visual COBOL. The pros, then, are the fact that the porting effort is relatively low (it's still COBOL) and the fact that one can run the application on a Unix or Windows platform. The cons are that although you're porting COBOL, the fact that you're porting to a different operating system could make things tricky (esp. if there are OpenVMS-specific dependencies);
Automatically translate the COBOL into a more modern language like Java. This has the obvious benefit of immediately freeing one from all the legacy issues in one fell swoop: hardware support, operating system support, and especially finding administrators and programmers. Apart from this being a big job, an obvious downside is the fact that one will end up with non-idiomatic Java (or whatever target language is ultimately chosen); arguably, this is something that can be ameliorated over time.
A rewrite, from the ground up (naturally, using modern technologies). Anyone who has done this knows how expensive and time-consuming it is. I've only included it to make the list complete :)
Note that there is no dependency on a proprietary DBMS; the database is ISAM file-based.
So ... my question is:
What are others faced with the imminent obsolescence of Itanium doing to maintain business continuity when their platform of choice is OpenVMS and COBOL?
UPDATE:
We have had an official assurance from our local HP representative that Integrity/Itanium/OpenVMS will be supported at least up until 2022. I guess this means that this whole issue is less about the platform, and more about the language (COBOL).
The main problem with this effort will be the portions of the code that are OpenVMS-specific. Most applications developed on OpenVMS typically use routines and procedures that are not easily ported to another platform. Rather than worry about specific language compatibility, I would initially focus on the run-time routines and command procedures used by the application.
An alternative approach may be to continue using the current application while developing a new one, or modifying a commercially available application to suit your needs. While the long-term status of Itanium is in question, history indicates that OpenVMS will remain viable for some time to come. There are still VAX machines being used today for business-critical applications. The fact that OpenVMS and its hardware are stable is the main reason for its longevity.
Dan
Looks like COBOL is the main dependency that keeps you worried. I understand that Itanium+OpenVMS in this picture is just a platform.
You're definitely not alone in running mission-critical stuff on OpenVMS. The HP site has an OpenVMS roadmap (both Alpha and Integrity); support currently stretches to 2015. Oracle seems to be trying to leverage its Sun assets in different domains recently.
In any case, if your worries are substantial (sure, we all worried about the Compaq and then HP takeovers, and the VAX >> Alpha >> Itanium transitions in the past), there's time to untie the COBOL dependency.
So I would look now into charting a migration path from COBOL to a more portable language of choice (e.g. ANSI C/C++ without platform extensions). Perhaps Java isn't the friendliest choice, given Oracle's recent activity. A rewrite, however unpleasant, will be more progressive and will likely streamline the whole process. The sooner one starts, the sooner one completes.
Also, in addition to emulators, there's still plenty of second-hand hardware. Ironically, one company I know is just now phasing in Integrity platforms to supplant mission-critical Alphas -- I guess it's "corporate testing requirements"...
Do-nothing is an option as well, though obviously a riskier one: OpenVMS platforms have proven to be dependable, so alternatively, finding a reliable third-party support company may extend your hardware contingency into the future.
This summer's Rolling Roadmap makes porting off OpenVMS look like an excellent idea.
Given how much COBOL exists in the world finding people to support COBOL will not be a problem for the foreseeable future. As noted above there are COBOL compilers on other platforms. The problem lies in the OpenVMS system service calls and DEC language extensions your application uses. You do not mention where your data is stored, so worst case your COBOL uses RMS. There is a company that provides an implementation of many OpenVMS system services on Linux and the Unixes. Not needing to replace those services while porting to another operating system may reduce the complexity. Check out Sector7.com.

Considering Porting App from .NET to Erlang - need advice [closed]

I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and JQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my up-time and to allow me to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, Git, TeamCity, and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all now pretty comfortable with it. I'm wondering whether a process like we have is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang, is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a windows network) but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches using Git, each of which gets its own mini-QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9% and better kool-aid is a bit dangerous to drink while blind. Yes, you can get some astounding stability figures, but you need to write your program in a way facilitating this. It does not come for free. It does not happen by magic either. Your application must be designed in a way such that other subsystems recover. OTP will help you a lot - but it still takes time to learn.
Continuous integration: Easily done. If you can call rebar or make through your build-bot you are probably set here already. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters) - all can be run from rebar.
QA cycle: I have not had any problems here. Again, if you're using rebar you can build embedded releases: minimized, self-contained Erlang VMs that you can copy anywhere and run. You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit so you have an overlay of newer fixes. Your options are numerous. Git already helps you a lot here.
Environmentalization: Easily done.
Logging centralization: Look into SASL and the error_logger. You can do anything you want here.
Alerting: The system can be probed for all you need (introspection is strong in Erlang). But you might have to code a bit to hook it up to the system of your choice.
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
Windows interaction: Dunno. (Bias: last time I used windows professionally was in 1998 or so).
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process and it might resist. It is a new environment, so new approaches will be needed in places, and you should expect to adapt and work around limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands, and get a constant high from working with an interesting language, solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then better cut your teeth on some smaller subsystems first and then work towards the larger goal. Getting the full benefit of the system takes months if not years. Getting partial benefit can be had in weeks though.

How do I run Delphi 7 on Windows 7 without disabling UAC? [closed]

I had the bad idea of switching to Windows 7 (32-bit) and now my old Delphi 7 won't work properly. Actually, it worked just fine until yesterday, but (I suppose) after some MS Windows updates, it crashes if I double-click a DPR file. However, it works if I run as administrator, or if I start the Delphi IDE without double-clicking a DPR file (and then load that DPR). So, obviously it is a UAC issue. I am really pissed off that I switched to Win 7, which is not much different from (or better than) Win XP. If I have to switch off UAC (and with it the only big improvement that Win 7 brings - the security) then I will really have no advantage from Win 7.
So, how do I make Delphi work without disabling UAC?
I hope that other people that had this problem found a solution.
:)
Update:
I have already tried giving Delphi rights to write in its "C:\Program Files\Borland\Delphi" folder. No luck.
I don't want to run it in admin mode (this includes XP Mode) since it will be running at a different level. Some API calls will not fail (since it is running in admin mode). Drag and drop from non-admin programs and other similar features will also not work.
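To make the "different level" point concrete: an elevated process runs at a higher integrity level, and UIPI blocks messages such as drag-and-drop from non-elevated windows into it. A small sketch (Vista+ APIs; is_elevated is a made-up name) of how a process can tell which side of that boundary it is on:

```c
#include <windows.h>
#include <stdio.h>

/* Query the process token to see whether we are running elevated. */
static BOOL is_elevated(void)
{
    HANDLE tok;
    TOKEN_ELEVATION te = {0};
    DWORD len = sizeof te;
    BOOL ok = FALSE;

    if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &tok)) {
        if (GetTokenInformation(tok, TokenElevation, &te, sizeof te, &len))
            ok = te.TokenIsElevated != 0;
        CloseHandle(tok);
    }
    return ok;
}

int main(void)
{
    printf("elevated: %s\n", is_elevated() ? "yes" : "no");
    return 0;
}
```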
Security is not a problem. I don't mindlessly download any piece of software I get from random people (read spammers) via email or from obscure web site so I don't get virused. Oh... and I don't use IE for browsing :)
Try installing Delphi outside %ProgramFiles%. It's the best bet for software that wasn't designed with UAC (or the guidelines on where to store user data, in place since about NT4) in mind.
I am running Delphi 5 and 7 on Windows 7 that way, no problems thus far.
The best solution is the XP Mode of Windows 7 Professional. I recommend converting the XP Mode VM and using it with VMware Player. Then it is fast and reliable.
Update: So in fact it's not the XP Mode itself which I recommend, but the XP license which goes along with it. You can duplicate it as many times as you need, but (of course) use only one instance at a time.
Solved. It was a DDE problem.
I just deleted the ddeexec key associated with Delphi projects.
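For anyone who wants to script the same fix, here is a sketch of what deleting that key amounts to. The ProgID path used here (DelphiProject\Shell\Open\ddeexec) is an assumption: check which ProgID .dpr files map to under HKEY_CLASSES_ROOT on your machine, and export the key before deleting anything.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* ASSUMPTION: "DelphiProject" is the ProgID Delphi 7 registered
       for .dpr files; verify in regedit first. (RegDeleteTree is Vista+.) */
    LSTATUS rc = RegDeleteTreeA(HKEY_CLASSES_ROOT,
                                "DelphiProject\\Shell\\Open\\ddeexec");
    if (rc == ERROR_SUCCESS)
        puts("ddeexec key removed");
    else
        printf("delete failed, code %ld\n", (long)rc);
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```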
Easiest way: install it as administrator, or search Google for the appropriate file. If you are still having problems, the folder is probably set to read-only, so you must change that.
This actually works; this is how I made mine work.

What's wrong with using a framework that has a lot of dependencies? [closed]

I recently told a friend that I was starting to learn Catalyst (Perl) and he fairly strongly emphasized that because Catalyst has so freakin' many dependencies, I should use something like Rails instead.
Isn't that a good thing that there are a lot of dependencies? Doesn't that indicate a lot of code re-use? I understand that there might be more effort involved with installing the framework but are there any other disadvantages?
I will resume my Catalyst tutorial while I wait for some juicy responses. :-)
There is nothing particularly wrong with this. The advantage of Catalyst is that its pieces can be used by people not using all of Catalyst. This means that there are more eyes looking at, and fixing bugs in, the critical parts.
The biggest complaint I hear is that it's annoying to watch all those messages go by in the CPAN shell as Catalyst is installing. The solution is to take advantage of your OS's package manager as you are getting started. On Debian, apt-get install libcatalyst-perl takes 15 seconds to install on a machine with no other Perl modules installed. 15 seconds. (A plain CPAN install is not difficult either, but I guess the standard CPAN shell asks you a lot of dumb questions, and that puts off the newbies.)
Don't worry about the dependencies, there are good tools for managing them, and they make the framework stronger and more flexible.
This is a subject I've seen postings about before. I've been meaning to write an article about it and have finally done so.
It is here: The Lie of Independence
I encourage you to read it. The gist is simple, though. The question is wrong. It's not 'Do you use an application or framework with lots of dependencies, or one that doesn't have them?'
It is: 'Do you use an application or framework that has lots of external dependencies, or one that tries to do it all internally?'
And the question that follows is 'Do you really have faith that the person or people writing this framework understand every nuance of every tiny detail of every task that needs to be done to process a web request?'
When there are version dependencies between components, you can find yourself backed into a non-working situation if you're forced to upgrade one component (say, for security reasons) before a compatible version of a dependent component is available.
That assumes you can get into a working state in the first place. It may be that if you try to use the current versions of all dependencies, you'll find that they don't play along.
The larger the number of dependencies, the greater the risk.
Rails isn't free of this problem, either. With every new Ruby release, there's a scramble to update instructions for how to get, say, database drivers built.
To be fair, this problem has trended towards "better" over time, albeit with bumps in the road.
My personal experience is that the more dependencies you have, the more versioning you have to keep track of, and this can lead to nightmarish situations. In particular, updating one dependency (due to a bug you want fixed, for example) can bring you compatibility issues with the other dependencies. As an example, I personally had a situation where gcc 4.0.3 worked with foo but not with bar (dependency of foo), and gcc 4.0.5 worked with bar but not with foo. Fortunately, 4.0.2 worked.
Also, a large number of dependencies tends to point at "Frankenstein's monster" products, made of parts that were not designed up front to play together. A well-integrated framework is designed to play nice and consistent. This, of course, can be fixed by properly wrapping the differences.

Delphi code completion performance [closed]

I have a few large (~600k lines of code) Delphi projects. They include some custom components which our team has developed.
Often, when I call up code completion with ctrl+space or just by pressing ".", the IDE locks up and thinks really hard for a long time. Sometimes the delay can be a full minute, or more. Other times, it pops up immediately with suggestions.
What factors influence the performance of intellisense in Delphi? Is there any way I can improve this performance?
My best solution so far is to turn off the automatic completion, and use ctrl+space when I need to meditate quietly for a minute or so.
I can't help but mention that VS2005, VS2008, and XCode all seem to give virtually instant intellisense feedback (although I've never tried it on a project this large).
As an alternative, I've offered this suggestion.
Delphi Code Insight invokes the compiler dll to do a custom compile when the user requests Code Insight (Ctrl+Space, '.', etc). This custom compile does a build in the unit and skips over codegen, linking, etc until it reaches your current offset in the file buffer. With this in mind, the unit list that the compiler sees before it gets to your current position will play a large factor in determining the speed of the Code Insight operation. There may be a unit (or multiple units) that are causing a hefty file system dependency, etc. It's quite possible that reordering the uses clause, refactoring the uses clause to be in multiple files, or removing units in the uses clause that aren't necessary for your current unit to compile may improve performance. Additionally, using packages or shortening your unit search path may improve CI response time.
Be sure to explicitly include all the units(*) used by your project in the dpr.
Do not rely on the search path to find a unit called from another unit; add it to the dpr. The dpr will be much longer, but all the compilation-related things will be faster, including Code Insight.
(*) not the units of the installed components.
I don't know which version you are using, but much faster code completion is one of the things I like most about Delphi 2009.
This is a long-standing issue with Delphi, and I had to resort to turning off automatic completion. After working that way for a while, I was very happy with it. Even if it only takes a fraction of a second, having the IDE lag my typing was disconcerting and interrupted my flow. Much nicer with the automatics off, IMO.
I just came across this problem myself, I fixed it by removing a dead network link from my environment library path. Solved my issue 100%.
Do you include the source directories for your team's custom components in the library path? It would be interesting to see the speed difference if only the component DCU files are in the library path, versus having the source files there too.
