Suppress DLL messages or provide a callback function - Delphi

I plan to use https://github.com/risoflora/brookframework as an embedded static server in a Delphi application. It requires https://github.com/risoflora/libsagui and seems very fast. But I can't find an option to suppress messages coming from the libsagui DLL.
A common scenario is when the desired port can't be bound: I'd prefer to get an exception or some error-handler callback rather than a MessageBox that my application can't control.
Any info or suggestions?

I'm going to take a flying leap here and just toss out what occurs to me at the moment. First, DLLs are not supposed to have ANY sort of UI activity. If someone does that deliberately, and not just because they don't know better, it usually signals there's a serious problem afoot in the run-time activity.
Second, assuming that this signals a serious problem, then there's likely to be some mechanism in place to avoid it. This is for an embedded situation, so one might ask "why can't a desired port be bound" if the person doing the embedding is also responsible for configuring the overall embedded environment? One does not leave things undefined or unconfigured in embedded situations or the application just blows up at some point. So the obvious question is, why are YOU (as the person who configured the embedded environment) writing code that's trying to connect to unavailable ports? And why are YOU also complaining that the underlying library isn't throwing an exception rather than throwing up its virtual hands saying it doesn't know how to handle a request YOUR CODE should have known better than to make?
In other words, if you feel the need to add a layer to your code between your app and the platform you configured to support it, in order to validate your own requests, then you're free to do so. Otherwise, I'm guessing the embedded library designer expects you to know the limits you configured into it. (Or there may be some he did not anticipate.)
I've done a lot of embedded programming in my past, and if I was using a configurable kernel that I'd configured to only support 10 tasks (because I knew my application didn't need more than that), then in my client code I'd do a check to ensure that said limit wasn't reached BEFORE creating a new task, rather than expecting to get an exception back if I was too stupid to follow my own design guidelines. In embedded situations, "exceptions" can lead to auto-shutdown or even self-destruct sequences. There is no effective way to handle "exceptions" in real-time situations, and often not even in plain embedded situations.
Imagine pushing on a brake pedal in a car and an exception is thrown -- the user is saying "slow down" and the library says, "Sorry, I can't do that right now". It's shades of "2001: A Space Odyssey" with HAL telling Dave "No". That's what "exceptions" in embedded systems can result in.
What's the difference between that and your app saying, "Connect to this URL via this port" and the underlying library saying, "Sorry, I can't do that right now"? What is your exception handling code going to do? You need to detect that stuff FIRST rather than wait for an exception to be thrown by the embedded library.
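For what it's worth, the "detect it first" approach is easy enough in Delphi. Here is a minimal sketch using plain Winsock -- nothing from brookframework/libsagui, whose own error-handling options I can't vouch for: check the port yourself, and only start the embedded server if the check passes.

    uses
      Windows, WinSock;

    // Rough sketch: test whether we can bind the port ourselves before
    // handing it to the embedded server. Note the inherent race: another
    // process could still grab the port between this check and the
    // server's own bind, so treat it as a pre-flight check, not a guarantee.
    function IsPortFree(APort: Word): Boolean;
    var
      WSA: TWSAData;
      S: TSocket;
      Addr: TSockAddrIn;
    begin
      Result := False;
      if WSAStartup(MakeWord(2, 2), WSA) <> 0 then
        Exit;
      try
        S := socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if S = INVALID_SOCKET then
          Exit;
        try
          FillChar(Addr, SizeOf(Addr), 0);
          Addr.sin_family := AF_INET;
          Addr.sin_port := htons(APort);
          Addr.sin_addr.S_addr := INADDR_ANY;
          // bind succeeds only if nothing else currently owns the port
          Result := bind(S, Addr, SizeOf(Addr)) = 0;
        finally
          closesocket(S);
        end;
      finally
        WSACleanup;
      end;
    end;

If IsPortFree returns False, you can raise your own exception or invoke your own error handler before the DLL ever gets a chance to show its MessageBox.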
Finally, this is more of a rant than anything else ... but one thing I've found sorely lacking in the vast majority of open-source projects on github I've looked at is any clear explanation of one or more USE-CASE MODELS. In this situation, WHY did the author choose to design this embedded DLL so it has situations where it simply throws up its hands by displaying an error message in a low-level MessageBox rather than providing some mechanism for handling the issue more elegantly? Obviously, he had SOMETHING in mind when he did that. Unfortunately for us, he did not explain it anywhere. Here we are trying to figure out how to deal with something that, on its face, makes no sense -- a DLL throwing up its hands when faced with an exceptional situation.
So my answer is, in the "spirit" of open-source code repos, you're free to dig into the code and try to figure out the USE-CASE MODEL that the author had in mind when he created this thing, and adapt your code to fit that model. Maybe start by looking at examples of code he posted that use this library, although if they're your typical trivial examples people often post, then they won't offer a lot of insight into the overall USE-CASE MODEL that is leading to your situation.
Said another way, assume the library is behaving "as designed". Then WHAT IS THIS LIBRARY EXPECTING YOU TO DO IN YOUR CODE IN ORDER TO AVOID THAT SITUATION FROM ARISING?
Hopefully someone will see this who has a much shorter definitive answer to that.

Related

Things you look for when trying to understand new software

I wonder what sort of things you look for when you start working on an existing, but new to you, system? Let's say that the system is quite big (whatever it means to you).
Some of the things that were identified are:
Where is a particular subroutine or procedure invoked?
What are the arguments, results and predicates of a particular function?
How does the flow of control reach a particular location?
Where is a particular variable set, used or queried?
Where is a particular variable declared?
Where is a particular data object accessed, i.e. created, read, updated or deleted?
What are the inputs and outputs of a particular module?
But if you're looking for something more specific, or any of the above questions is particularly important to you, please share it with us :)
I'm particularly interested in something that could be extracted in dynamic analysis/execution.
I like to use a "use case" approach:
First, I ask myself "what's this software's purpose?": I try to identify how users are going to interact with the application;
Once I have some "use cases", I try to understand which objects are most involved and how they interact with other objects.
Once I've done this, I draw a UML-type diagram that describes what I've just learned, for further reference. What happens after that depends on the task I've been assigned, i.e. modify the code, document the code, etc.
There is also the question of what motivation I have for learning the new system:
Bug fix/minor enhancement - In this case, I may focus solely on the portion of the system that performs the specific function that needs to be altered. This is a way to break down a huge system, but it's also a way to identify whether the issue is something I can fix or something I have to hand to the off-the-shelf company whose software we are using; e.g. a CRM, CMS, or ERP system can be a customized off-the-shelf system, so there are many pieces to it.
Project work - This would be the other case, and it's where I'd probably try to build myself a view from 30,000 feet or so to know what the high-level components are and which areas of the system the project impacts. An example of this is where I join a company and work off of an existing code base but don't have the luxury of the small focus of the previous case. Part of that view is to look for any patterns in the code in terms of naming conventions, project structure, etc., as this may be useful once I start changing code in the system. I'd probably do some tracing through the system and try to see where the uglier parts of the code are. By uglier I mean those parts that are kludge-like and may contain spaghetti code, having been rushed when first written and heavily reworked since.
To my mind, another way to view this is to ask whether I'm going to spend days or weeks wrapping my head around the system, as in the second case, or whether it should take, optimistically, only a few hours to get my footing and make the necessary changes.

What is a concise way of understanding RESTful and its implications?

Update: hooray! So it is a journey of practice and understanding. ;) Now I no longer feel so dumb.
I have read many articles on REST and coded up several Rails apps that make use of RESTful resources. However, I never really felt like I fully understood what it is, and what the difference is between RESTful and non-RESTful. I also have a hard time explaining to people why/when they should use it.
If someone has found a very clear explanation of REST and the circumstances in which to use it (and when not to), it would benefit the world if you could put it up. Thanks! =)
REST is usually learned like this:
1. You hear about REST being "using HTTP the way it was meant to be used", and from that you shun SOAP Web Services' envelopes, since most of what's needed by the many SOAP standards is handled by HTTP in a simple, no-nonsense way. You also quickly learn that you need to use the right method for the right operation.
2. Later, perhaps years later, you hear that REST is more than that. REST is in fact also the concept of linking between resources. This often takes a while to grasp the full meaning of, but when you learn it, you start introducing hyperlinks into your responses so that clients can navigate your system without being coupled to how the server wants to name its resources (i.e. the URIs).
3. Even later, you learn that you still haven't understood REST! And this is because you find out that media types are important. You start making media types called application/vnd.example.foo+json and put hyperlinks in them, since that's already your understanding of REST.
4. Years pass, and you re-read Fielding's thesis for the umpteenth time, to see if there's anything you missed, and it suddenly dawns on you what the HATEOAS constraint really is: the client has no notion of how the server's resources are structured; it discovers those relationships at runtime. It also means that the screen in front of the user is driven completely by what is passed over the wire, so in fact, if a server passes an image/jpeg then that's what you're supposed to show to the user, not an error message saying "AtomProcessor can't handle image/jpeg".
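To make steps 2 and 3 concrete: a hypermedia response in that spirit might look something like this sketch (the media type, field names, and link relations are all invented for illustration; it would be served as Content-Type: application/vnd.example.foo+json):

    {
      "order": { "id": 42, "state": "open", "total": 12.50 },
      "links": [
        { "rel": "add_line",    "href": "http://example.com/orders/42/lines" },
        { "rel": "add_payment", "href": "http://example.com/orders/42/payment" },
        { "rel": "cancel",      "href": "http://example.com/orders/42" }
      ]
    }

A client that keys off the "rel" values it understands never has to hard-code URIs, so the server is free to restructure them at any time.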
I'm just coming to terms with step 4, and I'm hoping the ladder isn't much longer! It's taken me seven years.
This article does a good job classifying the differences among several HTTP application styles, from WS-* to RESTian purity. What I like about this post is that it reminds you that most of what we call REST is really only partly in line with Roy Fielding's original definition.
InfoQ has a whole section addressing more of the "what is REST" angle as well.
In terms of REST vs. SOAP, this question seems to have a number of good responses, particularly the selected answer.
I would imagine YMMV, but I found it very easy to start understanding the details of REST after I realised that REST is essentially a continuation of the static WWW concepts into the web application design space. I had written a (rather longish) post on the same: Why REST?
Scalability is an obvious benefit of REST (stateless, caching).
But also - and this is probably the main benefit of hypertext - REST is ideal for when you have lots of clients to your service. Following REST and the hypertext constraint drastically reduces the coupling between all those clients and your server, which means you have more freedom when evolving/developing your service over time - you are not tied down by the risk of breaking would-be-coupled clients.
On a practical note, if you're working with rails - then restfulie is a great little framework for tackling hypertext on the client and server. Server side is a rails extension, and client is a DSL for handling state changes. Interesting stuff, check it out here: http://restfulie.caelum.com.br/ - I highly recommend the tutorial/demo vids they have up on vimeo :)
Content-Type: text/x-flamebait
I've been asking the same question lately, and my supposition is that half the problem with explaining why full-on REST is a good thing when defining an interface for machine-consumed data is that much of the time it isn't. OK, you'd need a really good reason to ignore the commonsense bits (URLs define resources, HTTP verbs define actions, etc. etc.) -- I'm in no way suggesting we go back to the abomination that was SOAP. But doing HATEOAS in a way that is both Fielding-approved (no non-standard media types) and machine-friendly seems to offer diminishing returns: it's all very well using a standard media type to describe the valid transitions (if such a media type exists), but where the application is at all complicated your consumer's agent still needs to know which are the right transitions to make to achieve the desired goal (a ticket purchase, or whatever), and it can't do that unless your consumer (a human) tells it. And if he's required to build into his program the out-of-band knowledge that the path with linkrels create_order => add_line => add_payment_info => confirm is the correct one, and reset_order is not the right path, then I don't see that it's so much more grievous a sin to make him teach his XML parser what to do with application/x-vnd.yourname.order.
I mean, obviously yes, it's less work all round if there's a suitable standard format with libraries and whatnot that can be reused, but in the (probably more common) case that there isn't, your options according to Fielding-REST are (a) to create a standard, or (b) to augment the client by downloading code to it. If you're merely looking to get the job done and not to change the world, option (c), "just make something up", probably looks quite tempting, and I for one wouldn't blame you for taking it.

How to hunt a Heisenbug

Recently, we received a bug report from one of our users: something on the screen was displayed incorrectly in our software. Somehow, we could not reproduce this in our development environment (Delphi 2007).
After some further study, it appears that this bug only manifests itself when "Code optimization" is turned on.
Are there any people here with experience in hunting down such a Heisenbug? Any specific constructs or coding bugs that commonly cause such an issue in Delphi software? Any places you would start looking?
I'll also just start debugging the whole thing in the usual way, but any tips specific to Optimization-related bugs (*) would be more than welcome!
(*) Note: I don't mean to say that the bug is caused by the optimizer; I think it's much more likely some wonky construct in the code is somehow pushed "over the edge" by the optimizer.
Update
It seems the bug boils down to a record being fully initialized with zeros when there's no code optimization, and the same record containing some random data when there is optimization. In this case, the random data seems to cause an enum type to contain invalid data (to my great surprise!).
Solution
The solution turned out to involve an uninitialized local record variable somewhere deep in the code. Apparently, without optimization the record was reset (heap?), and with optimization turned on, it was filled with the usual garbage. Thanks to you all for your contributions --- I learned a lot along the way!
Typically bugs of this form are caused by invalid memory access (reading uninitialised data, reading off the end of a buffer...) or thread race conditions.
The former will be affected by optimisations causing data layout to be rearranged in memory, and/or possibly by debug code that initialises newly allocated memory to some value, causing the incorrect code to "accidentally work".
The latter will be affected due to timings changing between optimisation levels. The former is generally much more likely.
If you have some automated way of making freshly allocated memory be filled with some constant value before it is passed to the program, and this makes the crash go away or become reproducible in the debug build, that'll provide a good point to start chasing things.
It could very well be a memory vs. register issue: your program running fine by relying on memory persisting after a free.
I would recommend running your application with FastMM4 in full debug mode to be sure of your memory management.
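For reference, a minimal sketch of wiring FastMM4 in: it must be the very first unit in the .dpr so it installs before any allocation happens; FullDebugMode is enabled in FastMM4Options.inc (or via a project-level conditional define), and the debug build needs FastMM_FullDebugMode.dll available.

    program MyApp;

    uses
      FastMM4,  // must come first, before any unit that allocates memory
      Forms,
      MainUnit in 'MainUnit.pas' {MainForm};

    begin
      Application.Initialize;
      Application.CreateForm(TMainForm, MainForm);
      Application.Run;
    end.

In FullDebugMode, freed blocks are filled with a recognisable pattern and many use-after-free accesses get reported -- exactly the "memory persistence after a free" scenario suspected above.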
Another (not free) tool which can be very useful in a case like this is EurekaLog.
Another thing that I've seen: a crash with the FPU registers being botched when calling some outside code (DLL, COM...) while with the debugger everything was OK.
A record that contains different data according to different compiler settings tells me one thing: That the record is not being explicitly initialised.
You may find that the setting of the compiler optimization flag is only one factor that might affect the content of that record - with any uninitialised data structures the one thing that you can rely on is that you can't rely on the initial content of the structure.
In simple terms:
class member data is initialised (to zeros) for new instances of the class
unit-level (global) variables are likewise initialised to zero
local variables (in functions and procedures) are NOT initialised, except for managed types (interface references, dynamic arrays, strings) and any fields of those types inside records; everything else starts out as stack garbage, as the sketch below shows
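A minimal sketch of that failure mode (the type names are invented for illustration):

    type
      TDisplayMode = (dmNormal, dmInverted);  // hypothetical enum
      TSettings = record
        Width: Integer;
        Mode: TDisplayMode;
      end;

    procedure Render;
    var
      Settings: TSettings;  // local record: stack-allocated, NOT initialised
    begin
      // Without the next line, Settings holds whatever was left on the stack,
      // so Ord(Settings.Mode) can fall outside dmNormal..dmInverted -- and
      // what the stack happens to hold varies with optimization settings.
      FillChar(Settings, SizeOf(Settings), 0);
      // ... use Settings ...
    end;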
The question as stated is now a little misleading because it seems you found your "Heisenbug" easily enough. Now the issue is how to deal with it, and the answer is simply to explicitly initialise your record so that you aren't reliant on whatever behaviour or side effect of the compiler is sometimes taking care of that for you and sometimes not.
Especially in purely native languages, like Delphi, you should be more than careful not to abuse the freedom to cast anything to anything.
IOW: one thing I have seen is someone copying the definition of a class (e.g. from the implementation section of the RTL or VCL) into his own code and then casting instances of the original class to his copy.
Now, after upgrading the library where the original class came from, you might experience all kinds of weird stuff, like jumping into the wrong methods or buffer overflows.
There's also the habit of using signed integers as pointers and vice versa (instead of Cardinal).
This works perfectly fine as long as your process has only 2 GB of address space. But boot with the /3GB switch and you will see a lot of apps start acting crazy: those made the "pointer = signed integer" assumption at least somewhere.
Your customer uses 64-bit Windows? Chances are he has a larger address space for 32-bit apps. Pretty tough to debug without such a test system available.
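A compact illustration of the trap in 32-bit Delphi (the procedure name is invented; the comparison is the kind of thing that silently breaks once addresses cross the 2 GB line):

    procedure PointerCastTrap;
    var
      P: Pointer;
      AsSigned: Integer;    // wrong: addresses >= $80000000 turn negative
      AsUnsigned: Cardinal; // right for 32-bit code: full address range
    begin
      GetMem(P, 16);
      try
        AsSigned := Integer(P);
        AsUnsigned := Cardinal(P);
        // A check like "AsSigned > 0" (meant as 'is this a valid pointer?')
        // fails for high addresses, while "AsUnsigned > 0" keeps working.
        Assert((AsUnsigned > 0) or (P = nil));
      finally
        FreeMem(P);
      end;
    end;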
Then, there's race conditions.
Like having 2 threads, where one is very, very slow, so that you instinctively assume it will always be the last one to finish, and so there's no code that handles the scenario where "Captain Slow" finishes first.
Changes in the underlying technologies can make these assumptions very wrong, very fast indeed.
Take a look at the upcoming breed of Flash-based super-mega-fast server storage.
Systems that can read and write Gigabytes per second. Applications that assume the IO stuff to be significantly slower than some calculations on in-memory values will easily fail on this kind of fast storage.
I could go on and on, but I gotta run right now...
Cheers
Code optimization does not necessarily mean that debug symbols have to be left out. Do a debug build with code optimization turned on; then you can still debug the program, and maybe the error will occur now.
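In Delphi the two settings are independent; assuming a debug build configuration (optimization off, debug info on), you can even force the optimizer on just for the suspect code with the $O directive (procedure name invented for the sketch):

    // Debug info generation is controlled separately ($D / linker settings),
    // so this stays debuggable while being compiled with optimization:
    {$O+}
    procedure DrawStatusPanel;
    begin
      // ... the code under suspicion, now optimized even in a debug build ...
    end;
    {$O-}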
One easy thing to do is turn on compiler warnings and hints, rebuild the project, and then fix all warnings/hints.
Cheers
If it's Delphi business code, with data-aware components etc., the following might not apply.
I'm, however, writing machine vision code, which is quite computational. Most of the unit tests are console based. I'm also involved with FPC (Free Pascal), and over the years have tested a lot with it -- partially as a hobby, partially in desperate situations where I wanted any hunch.
Some standard tricks that I have tried (in decreasing order of usefulness):
use -gv and valgrind the code (practically this means applications are required to run on Linux/FreeBSD, but for computational code and unit tests that can be doable)
compile using the FPC parameter -gt (= trash local variables, i.e. randomize local variables on procedure entry)
modify the heap manager to randomize the data of the blocks it hands out (also applicable to Delphi code)
try FPC's range/overflow checking and compiler hints
run on a Mac Mini (PowerPC) or Win64; due to totally different rules and memory layouts these can catch pretty funky things
Tricks 2 and 3 together nearly allow you to find most, if not all, initialization problems.
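For tricks 1 and 2, assuming a console test program tests.pas, the invocations might look like:

    # trick 2: trash local variables on procedure entry
    fpc -gt tests.pas && ./tests

    # trick 1: build with valgrind support, then run under valgrind
    fpc -gv tests.pas && valgrind ./tests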
Try to find any clues, and then go back to Delphi and search in a more focused way, debug, etc.
I do realize this is not easy. I have a lot of FPC experience and didn't have to figure everything out from scratch for these cases. Still, it might be worth a try, and might be a motivation to start setting up non-visual systems and unit tests to be FPC-compatible and platform-independent. Most of this work will be needed anyway, seeing the Delphi roadmap.
With such problems I always advise using logfiles.
Question: can you somehow detect the incorrect display from within the source code?
If not, my answer won't help you.
If yes, check for the incorrectness, and as soon as you find it, dump the stack to a logfile (see post-mortem debugging for details about dumping and resymbolizing the stack).
If you see that some data has been corrupted, but you don't know how and when that happened, extract a function that tests the data for validity (logging on failure), and call this function from more and more places over the program's execution (e.g. after each menu call). If you reiterate such an approach a few times, you have a good chance of finding the problem.
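A bare-bones Delphi version of that idea; the validity test itself is application-specific, so the condition at the call site below is just a placeholder:

    uses
      SysUtils;

    // Append a line to a logfile whenever a validity check fails, recording
    // WHERE in the program flow the corruption was first observed.
    procedure LogIfInvalid(const Where: string; Valid: Boolean);
    var
      Log: TextFile;
    begin
      if Valid then
        Exit;
      AssignFile(Log, 'validity.log');
      if FileExists('validity.log') then
        Append(Log)
      else
        Rewrite(Log);
      try
        WriteLn(Log, FormatDateTime('yyyy-mm-dd hh:nn:ss.zzz', Now),
          ' state invalid after: ', Where);
      finally
        CloseFile(Log);
      end;
    end;

    // Call sites, added incrementally as you narrow the search:
    //   LogIfInvalid('after main menu click', DisplayWidth > 0);  // placeholder test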
Is this a local variable inside a procedure or function?
If so, then it lives on the stack, and will contain garbage. Depending on the execution path and compiler settings the garbage will change, potentially pushing your logic 'over the edge'.
--jeroen
Given your description of the problem, I think you had uninitialized data that you got away with without the optimizer, but which blew up with optimization on.

Dealing with illogical managers [closed]

Closed. This question is off-topic and is not currently accepting answers. Closed 9 years ago.
At a place I used to work, the typical response to any problem was to blame the hardware or the users for not using the system perfectly. Prior to that job I had adopted the philosophy that it's my fault until I can prove otherwise (and so far that's been correct at least 99 times out of 100).
One of the last "unsolvable" problems when I was there was an abundance of database timeouts. After months of research, I still had only theories but couldn't prove any of them. One of my developers adamantly suggested replacing the network (every router, switch, and access point) but couldn't provide any evidence that the network was the cause; it was, however, "obviously the cause" according to my manager (who had no development/IT experience), so he took over the problem. One caveat and Fog Creek plug: he couldn't account for the fact that error reporting via FogBugz worked perfectly, against the same SQL Server as the rest of the data.
A couple of timeout-free months later, my manager boasted that he had fixed the timeouts ("Look, no timeouts!"). I had to hold back from grabbing a rock and saying "Look, no tigers!", but I did ask how he knew they would otherwise have occurred, to which I got no response. The timeouts did return (and in greater numbers) a couple of months later.
I'm pretty content with how I handled the situation, but I'm curious how the SO crowd would respond to letting a superior or colleague implement a solution you know (or are very sure) is wrong and will likely waste thousands of dollars.
Let them, but at the same time continue searching for the real cause.
A couple thousand dollars is money well spent if it keeps me from having to fight that kind of thinking (which is futile).
Well, if the problem is upper management, then I would do as you have done - lodge your complaint, then follow instructions. If it turns out they were right (it happens every now and then) then you look like a good employee despite your misgivings. If it turns out you were right, then they might be more willing to listen to you given that you allowed them their turn.
This is, of course, optimistic.
In the case of a colleague, take the problem up a level and consult your superiors for advice on how to approach the subject. Be fair to both your perspective and your colleague's, then follow the advice you're given.
Sometimes it's best to leave a manager be. If you think about his pressures and responsibilities, he had to be seen to be doing something rather than "nothing". After enough time, "investigating" looks like nothing to the outside parties who need the timeouts to stop.
By taking an action he creates an opportunity to keep researching. The trick is to find a way to put your solutions in his context. Here is something we can do now, and here's what we can continue to do. For example, "We can replace the networking gear as a precautionary step, and then look at the version control logs to rule out that possibility."
This gives him something proactive so he can look productive up his chain while achieving the solution you want which will ultimately be successful.
In the long term you should look to work for someone who trusts your technical decisions implicitly, whom you can talk to candidly, and who will help you help him navigate the politics in a way where you both know what's going on. If your manager isn't that person, move.
Is this a big problem? It's not your job to save your company dollars, beyond wanting your company to remain solvent so you get paid.
If it's just one manager, he will be weeded out sooner or later; if your entire company culture is like this, maybe it would be time to move on.
In the mean time, see if you can see this from your manager's perspective.
I'd consider your manager's intent to be a good thing. It's the people who don't want to bother at all that I find more difficult. It's just best to find a way to use that energy to be helpful.
One common problem for lots of people (occasionally myself included) is that they flail around when trying to diagnose a problem. If you're wildly guessing at it, then, particularly with modern computers, you have only the slimmest possibility of being right. Approaching this type of problem with that type of attitude generally means you'll never fix it.
The best way of handling complex debugging is divide and conquer. In this case, first think up a test, then implement it. Did the test act as expected? Depending on where you are with your tests, you are getting closer to or farther away from the problem. The key is that ALL of the tests must result in some concrete (objective) behavior. If the results are ambiguous, the test is useless.
If one part of the system is getting disconnects but some other part is not, then you have a huge amount of valuable information (it also shows it's not the network). What's the difference between the parts? Just keep descending until you get somewhere ...
Getting back to your manager: whenever I encounter that type of personality problem, I try to redirect the energy into something more useful. The desire is there; it just needs some help in getting shaped. If you can convince your manager to make sure the tests are concrete, then if they do enough of them, they'll produce enough information to correctly guess the bug. Sure, a more consistent approach might be faster, but why turn down some free assistance? I generally feel that there is a useful role for anyone on any project; it's all about making it possible to harness their efforts ....
Paul.

What is the difference between a bug and a change request in MSF for CMMI?

I'm currently evaluating the MSF for CMMI process template under TFS for use on my development team, and I'm having trouble understanding the need for separate bug and change request work item types.
I understand that it is beneficial to be able to differentiate between bugs (errors) and change requests (changing requirements) when generating reports.
In our current system, however, we only have a single type of change request and just use a field to indicate whether it is a bug, requirement change, etc (this field can be used to build report queries).
What are the benefits of having a separate workflow for bugs?
I'm also confused by the fact that developers can submit work against a bug or a change request; I thought the intended workflow was for bugs to generate change requests, which are what the developer references when making changes.
@Luke
I don't disagree with you, but this difference is typically the explanation given for why there are two different processes available for handling the two types of issues.
I'd say that if the color of the home page was originally designed to be red, and for some reason it is blue, that's easily a quick fix and doesn't need to involve many people or man-hours to do the change. Just check out the file, change the color, check it back in and update the bug.
However, if the color of the home page was designed to be red, and is red, but someone thinks it needs to be blue, that is, to me anyway, a different type of change. For instance, has anyone thought about the impact this might have on other parts of the page, like images and logos overlaying the blue background? Could there be borders of things that look bad? Link underlining is blue; will that show up?
As an example, I am red/green color blind; changing the color of something is, for me, not something I take lightly. There are enough webpages on the web that give me problems. This is just to make the point that even the most trivial change can be nontrivial if you consider everything.
The actual end implementation change is probably much the same, but to me a change request is a different beast, precisely because it needs to be thought about more to make sure it will work as expected.
A bug, however, is when someone said "this is how we're going to do it" and then someone did it differently.
A change request is more like "but we need to consider this other thing as well... hmm...".
There are exceptions of course, but let me take your examples apart.
If the server was designed to handle more than 300,000,000,000 page views, then yes, it is a bug that it doesn't. But designing a server to handle that many page views is more than just saying "our server should handle 300,000,000,000 page views"; it should contain a very detailed specification for how it can do that, right down to processing-time guarantees and average disk-access times. If the code is then implemented exactly as designed, and is unable to perform as expected, then the question becomes: did we design it incorrectly, or did we implement it incorrectly?
I agree that in this case, whether it is to be considered a design flaw or an implementation flaw depends on the actual reason why it fails to live up to expectations. For instance, if someone assumed disks were 100 times as fast as they actually are, and this is deemed to be the reason why the server fails to perform as expected, I'd say it is a design bug, and someone needs to redesign. If the original requirement of that many page views still holds, a major redesign with more in-memory data and the like might have to be undertaken.
However, if someone has just failed to take into account how RAID disks operate and how to correctly benefit from striped media, that's a bug and might not need that big a change to fix.
Again, there will of course be exceptions.
In any case, the original difference I stated is the one I have found to be true in most cases.
Keep in mind that part of a work item type definition for TFS is the definition of its "Workflow", meaning the states the work item can be in and the transitions between those states. This can be secured by security role.
So - generally speaking - a "Change Request" would be initiated and approved by someone relatively high up in an organization (someone with "sponsorship" rights related to spending the resources to make a possibly very large change to the system). Ultimately this person would be the one to approve that the change was made successfully.
For a "Bug" however, ANY user of the application should be able to initiate a Bug.
At an organization where I implemented TFS, only department heads could be the originators of a "Change Request" - but "Bugs" were created from Help Desk tickets (not automated, just through process...).
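For reference, that workflow lives in the work item type's XML definition. A heavily simplified sketch follows (the state names and the "Sponsors" group are approximations for illustration, not the exact MSF for CMMI template):

    <WORKITEMTYPE name="Change Request">
      <WORKFLOW>
        <STATES>
          <STATE value="Proposed" />
          <STATE value="Active" />
          <STATE value="Resolved" />
          <STATE value="Closed" />
        </STATES>
        <TRANSITIONS>
          <!-- the "for" attribute restricts who may make this transition -->
          <TRANSITION from="Proposed" to="Active" for="[Project]\Sponsors">
            <REASONS>
              <DEFAULTREASON value="Approved for work" />
            </REASONS>
          </TRANSITION>
          <!-- remaining transitions elided -->
        </TRANSITIONS>
      </WORKFLOW>
    </WORKITEMTYPE>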
Generally, though I can't speak for CMM, change requests and bugs are handled and considered differently because they typically refer to different phases of your application's lifecycle.
A bug is a defect in your program implementation. For instance, if you design your program to be able to add two numbers and give the user the sum, a defect would be that it does not handle negative numbers correctly, and thus a bug.
A change request is when you have a design defect. For instance, you might have specifically said that your program should not handle negative numbers. A change request is then filed in order to redesign, and thus reimplement, that part. The design defect might not be intentional; it could easily be because you just didn't consider that part when you originally designed your program, or because new cases that didn't exist when the original design was created have been invented or discovered since.
In other words, a program might operate exactly as designed, but need to be changed. This is a change request.
Typically, fixing a bug is considered a much cheaper action than executing a change request, as the bug was never intended to be part of your program. The design, however, was.
And thus a different workflow might be necessary to handle the two different scenarios. For instance, you might have a different way of confirming and filing bugs than you have for change requests, which might require more work to lay out the consequences of the change.
A bug is something that is broken in a requirement which has already been approved for implementation.
A change request needs to go through a cycle in which the impact and effort has to be estimated for that change, and then it has to be approved for implementation before work on it can begin.
The two are fundamentally different under CMM.
Is my assumption incorrect then that change requests should be generated from bugs? I'm confused because I don't think all bugs should be automatically approved for implementation -- they may be trivial and at least in our case will go through the same review process as a change request before being assigned to a developer.
Implementation always comes from a requirement. It may come from a product manager; it may come from a random thought of yours. It may be documented; it may come from a conversation. At the end of the day, even for something as simple as a := a + 1, the "real" implementation is based on the compiler, linker, CPU, etc., which in turn depend on the physical laws of real life.
A bug is something that was implemented contrary to the ORIGINAL requirement. Anything other than that is a change request.
If the requirement is changed and the implementation needs to be changed as well, it's a change request.
If a dependency has changed (for example, a web browser stopped supporting some tags and you need to make a change), it's a change request.
In the real world, anything that is not properly documented should be treated as a change request. The product manager forgot to put something in the story? Sorry, that's a change request.
All change requests should be properly estimated and pointed. Developers get paid for implementing change requests, not for making bugs and then fixing the ones they made.
