Does adding an environment variable affect app speed on iOS?

So the question is pretty simple: does adding environment variables affect application speed? In particular, I want to add the DYLD_PRINT_STATISTICS variable to a release commit, but I'm afraid it might affect app start speed. Any links or info on the topic would be appreciated. Thanks in advance.

As usual, the answer is "it depends". In principle, adding an environment variable to the environment of an app does not affect speed in a noticeable way.
However, if you decide to set a specific variable, there is a high probability that you want your application to interpret the contents of that variable in a specific way. That influence is entirely defined by the application, and there is very little you can say "in general".
In your case, DYLD_PRINT_STATISTICS seems to only print "launch statistics", whatever that may be. Given that it can be found on an Apple web page describing iOS Debugging Magic, it does not appear wise to put it into a shipping build. The statistics would be "printed" (i.e. probably logged to some remote location) on your customers' devices. This may be useful during development, where they might get printed to the Xcode console, but not on customer devices, where you will never see the results.
If this is meant to be a "magic" measure to make an application "run" in some better way, I do not consider it a good solution, and you should dig deeper to find the actual problem.

Related

Find unused code in a Rails app

How do I find what code is and isn't being run in production?
The app is well-tested, but there are a lot of tests that test unused code. Hence they get coverage when running tests... I'd like to refactor and clean up this mess; it keeps wasting my time.
I have a lot of background jobs, which is why I'd like the production environment to guide me. Running on Heroku, I can spin up dynos to compensate for any performance impact from the profiler.
The related question "How can I find unused methods in a Ruby app?" was not helpful.
Bonus: metrics to show how often a line of code is run. Don't know why I want it, but I do! :)
Under normal circumstances the approach would be to use your test data for code coverage, but as you say you have parts of your code that are tested but are not used on the production app, you could do something slightly different.
Just for clarity first: Don't trust automatic tools. They will only show you results for things you actively test, nothing more.
With the disclaimer behind us, I propose you use a code coverage tool (like rcov, or simplecov for Ruby 1.9) on your production app and measure the code paths that are actually used by your users. While these tools were originally designed for measuring test coverage, you can also use them for production coverage.
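For a sense of what these tools do under the hood: simplecov is built on Ruby 1.9's standard Coverage module, and you can drive that module directly. A minimal sketch, assuming a Rack app where the very top of config.ru runs before the application code is loaded; the dump path and the at_exit trigger are illustrative, and a long-running multi-worker server would need a smarter way to flush results:
# very top of config.ru, before the app code is required
require 'coverage'
require 'json'

Coverage.start

at_exit do
  # Coverage.result returns { "path/to/file.rb" => [hits or nil per line], ... }
  File.write('tmp/production_coverage.json', JSON.dump(Coverage.result))
end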
Under the assumption that during the test time-frame all relevant code paths are visited, you can remove the rest. Unfortunately, this assumption will most probably not fully hold. So you will still have to apply your knowledge of the app and its inner workings when removing parts. This is even more important when removing declarative parts (like model references) as those are often not directly run but only used for configuring other parts of the system.
Another approach which could be combined with the above is to try to refactor your app into distinguished features that you can turn on and off. Then you can turn features that are suspected to be unused off and check if nobody complains :)
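A toy sketch of that "turn it off and see who complains" idea; the FEATURES hash, the helper, and the :csv_export flag are made up for illustration:
FEATURES = {
  csv_export: false,   # suspected unused, switched off for the trial period
  pdf_export: true
}.freeze

def feature_enabled?(name)
  FEATURES.fetch(name, false)
end

# e.g. guard a controller action with:
#   head :not_found unless feature_enabled?(:csv_export)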
And as a final note: you won't find a magic tool to do your full analysis. That's because no tool can know whether a certain piece of code is used by actual users or not. The only thing tools can do is create (more or less) static reachability graphs, telling you if your code is somehow called from a certain point. With a dynamic language like Ruby even this is rather hard to achieve, as static analysis doesn't bring much insight in the face of meta-programming or dynamic calls, which are heavily used in a Rails context. So some tools actually run your code or try to get insight from test coverage. But there is definitely no magic spell.
So, given the high internal (mostly hidden) complexity of a Rails application, you will not get around doing most of the analysis by hand. The best advice would probably be to try to modularize your app and turn off certain modules to test whether they are actually used. This can be supported by proper integration tests.
Check out the coverband gem; it does exactly what you are searching for.
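Roughly, you add the gem and point it at a store it can write hit counts to. The class and option names below are from one Coverband version and may not match yours, so treat this as a sketch and follow the gem's README:
# Gemfile
gem 'coverband'

# config/coverband.rb (picked up automatically by the gem's railtie in recent versions)
Coverband.configure do |config|
  config.store  = Coverband::Adapters::RedisStore.new(Redis.new)
  config.logger = Rails.logger
end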
Maybe you can try rails_best_practices to check for unused methods and classes.
Here it is on GitHub: https://github.com/railsbp/rails_best_practices .
Put gem "rails_best_practices" in your Gemfile and then run rails_best_practices . from the application root to analyze the project.
I had the same problem, and after exploring some alternatives I realized that I have all the info available out of the box: log files. Our log format is as follows:
Dec 18 03:10:41 ip-xx-xx-xx-xx appname-p[7776]: Processing by MyController#show as HTML
So I created a simple script to parse this info:
# field 8 of this log format is the controller#action pair
zfgrep Processing production.log*.gz | awk '{print $8}' > ~/tmp/action
# count the occurrences and sort them, most frequent first
sort ~/tmp/action | uniq -c | sort -g -r > ~/tmp/histogram
This produced a count of how often a given controller#action was accessed:
4394886 MyController#index
3237203 MyController#show
1644765 MyController#edit
The next step is to compare it to the list of all controller#action pairs in the app (using the rake routes output, or by running the same script against the test suite's logs).
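A rough Ruby sketch of that comparison, assuming ActiveSupport is available for camelize; the file locations and the regex are tied to the log format above, and the script name is made up:
# compare_actions.rb: list controller#action pairs that never show up in the logs
require 'active_support/core_ext/string/inflections'   # for String#camelize

used = File.readlines(File.expand_path('~/tmp/histogram'))
           .map { |line| line.split.last }

all = `rake routes`.scan(%r{([a-z0-9_/]+)#([a-z0-9_]+)}).map do |controller, action|
  # routes print "admin/users#index"; logs print "Admin::UsersController#index"
  "#{controller.camelize}Controller##{action}"
end.uniq

unused = (all - used).sort
puts 'No hits in the logs:'
puts unused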
You already got the idea of marking suspicious methods as private (which may break your application).
A small variation I did in the past: add a small piece of code to all suspicious methods to log their use. In my case it was a user popup: "You called an obsolete function - if you really need it, please contact IT".
After one year we had a good overview of what was really used (it was a business application, and there were functions needed only once a year).
In your case you should only log the usage. Everything that is not logged after a reasonable period is unused.
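In a Rails app the logging variant can be as small as one line at the top of each suspicious method; the method name here is invented for illustration:
def legacy_export   # a suspected-dead method
  Rails.logger.warn("OBSOLETE? legacy_export called at #{Time.now}")   # remove after the observation period
  # ... existing body unchanged ...
end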
I'm not very familiar with Ruby and RoR, but here's a crazy guess:
add an after_filter method which logs the name of the previously called method (grab it from the call stack) to a file - see the sketch after these steps
deploy this to production
wait for a while
remove all methods that are not in the log.
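A minimal version of that idea for a Rails app; newer Rails versions call the hook after_action, and the log prefix is made up:
class ApplicationController < ActionController::Base
  after_filter :log_action_usage

  private

  def log_action_usage
    # one line per request: which controller#action actually ran
    Rails.logger.info("USAGE #{self.class.name}##{action_name}")
  end
end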
P.S. The solution with Alt+F7 in NetBeans or RubyMine is probably much better :)
Metaprogramming
Object#method_missing
Override Object#method_missing. Inside, log the called class and method, asynchronously, to a data store. Then manually call the original method with the proper arguments, based on the arguments passed to method_missing.
Object tree
Then compare the data in the data store to the contents of the application's object tree.
disclaimer: This will surely require significant performance and resource consideration. Also, it will take a little tinkering to get that to work, but theoretically it should work. I'll leave it as an exercise to the original poster to implement it. ;)
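One practical wrinkle: method_missing only fires for methods that are not defined, so intercepting methods that do exist means wrapping them instead. A sketch of that variation; CallRecorder and the in-memory set are made up, and a real run would push to a data store asynchronously, as suggested above:
require 'set'

module CallRecorder
  SEEN = Set.new   # not thread-safe; a real version would use a proper store

  def self.instrument(klass)
    klass.instance_methods(false).each do |name|
      original = klass.instance_method(name)
      klass.send(:define_method, name) do |*args, &block|
        SEEN << "#{klass}##{name}"
        original.bind(self).call(*args, &block)   # keyword arguments need extra care on Ruby 3
      end
    end
  end
end

# CallRecorder.instrument(Invoice)
# later: inspect CallRecorder::SEEN and compare it with the methods you expected to be hit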
Have you tried creating a test suite using something like Sahi? You could then record all your user journeys and tie those tests to rcov or something similar.
You do have to ensure you cover all user journeys, but after that you can look at what rcov spits out and at least start to prune out stuff that is obviously never covered.
This isn't a very proactive approach, but I've often used results gathered from New Relic to see if something I suspected of being unused had been called in production anytime in the past month or so. The apps I've used this on have been pretty small, though, and it's decently expensive for larger applications.
I've never used it myself, but this post about the laser gem seems to talk about solving your exact problem.
Mark suspicious methods as private. If that does not break the code, check whether the methods are used inside the class; then you can delete things.
It is not the perfect solution, but for example in NetBeans you can find usages of methods by right-clicking on them (or pressing Alt+F7).
So if a method is unused, you will see it.

How to hunt a Heisenbug

Recently, we received a bug report from one of our users: something on the screen was displayed incorrectly in our software. Somehow, we could not reproduce this in our development environment (Delphi 2007).
After some further study, it appears that this bug only manifests itself when "Code optimization" is turned on.
Are there any people here with experience in hunting down such a Heisenbug? Any specific constructs or coding bugs that commonly cause such an issue in Delphi software? Any places you would start looking?
I'll also just start debugging the whole thing in the usual way, but any tips specific to Optimization-related bugs (*) would be more than welcome!
(*) Note: I don't mean to say that the bug is caused by the optimizer; I think it's much more likely some wonky construct in the code is somehow pushed "over the edge" by the optimizer.
Update
It seems the bug boils down to a record being fully initialized with zeros when there's no code optimization, and the same record containing some random data when there is optimization. In this case, the random data seems to cause an enum type to contain invalid data (to my great surprise!).
Solution
The solution turned out to involve an uninitialized local record variable somewhere deep in the code. Apparently, without optimization the record was reset (heap?), and with optimization turned on, the record was filled with the usual garbage. Thanks to you all for your contributions; I learned a lot along the way!
Typically bugs of this form are caused by invalid memory access (reading uninitialised data, reading off the end of a buffer...) or thread race conditions.
The former will be affected by optimisations causing the data layout to be rearranged in memory, and/or possibly by debug code that initialises newly allocated memory to some value, causing the incorrect code to "accidentally work".
The latter will be affected due to timings changing between optimisation levels. The former is generally much more likely.
If you have some automated way of making freshly allocated memory be filled with some constant value before it is passed to the program, and this makes the crash go away or become reproducible in the debug build, that'll provide a good point to start chasing things.
It could very well be a memory vs. register issue: your program running fine by relying on memory persistence after a free.
I would recommend running your application with FastMM4 in full debug mode to be sure of your memory management.
Another (not free) tool which can be very useful in a case like this is Eurekalog.
Another thing that I've seen: a crash with the FPU registers being botched when calling some outside code (DLL, COM...) while with the debugger everything was OK.
A record that contains different data according to different compiler settings tells me one thing: That the record is not being explicitly initialised.
You may find that the setting of the compiler optimization flag is only one factor that might affect the content of that record - with any uninitialised data structures the one thing that you can rely on is that you can't rely on the initial content of the structure.
In simple terms:
class member data is initialised (to zeros) for new instances of the class
local variables (in functions and procedures) are NOT initialised, except for a few specific types: interface references, dynamic arrays and strings, and records containing one or more fields of those types (for which those fields are initialised); unit-level variables, by contrast, are zero-initialised
The question as stated is now a little misleading, because it seems you found your "Heisenbug" fairly easily. Now the issue is how to deal with it, and the answer is simply to explicitly initialise your record, so that you aren't reliant on whatever behaviour or side effect of the compiler is sometimes taking care of that for you and sometimes not.
Especially in purely native languages, like Delphi, you should be more than careful not to abuse the freedom to be able to cast anything to anything.
IOW: one thing I have seen is that someone copies the definition of a class (e.g. from the implementation section in the RTL or VCL) into his own code and then casts instances of the original class to his copy.
Now, after upgrading the library where the original class came from, you might experience all kinds of weird stuff, like jumping into the wrong methods or buffer overflows.
There's also the habit of using signed integers as pointers and vice versa (instead of cardinals).
This works perfectly fine as long as your process has only 2 GB of address space. But boot with the /3GB switch and you will see a lot of apps that start acting crazy. Those made the assumption "pointer = signed integer" at least somewhere.
Your customer uses a 64Bit Windows? Chances are, he might have a larger address space for 32Bit apps. Pretty tough to debug w/o having such a test system available.
Then, there's race conditions.
Like having two threads, where one is very, very slow, so that you instinctively assume it will always be the last one, and so there's no code that handles the scenario where "Captain Slow" finishes first.
Changes in the underlying technologies can make these assumptions very wrong, very fast indeed.
Take a look at the upcoming breed of Flash-based super-mega-fast server storage.
Systems that can read and write Gigabytes per second. Applications that assume the IO stuff to be significantly slower than some calculations on in-memory values will easily fail on this kind of fast storage.
I could go on and on, but I gotta run right now...
Cheers
Code optimization does not necessarily mean that debug symbols have to be left out. Do a debug build with code optimization; then you can still debug the program, and maybe the error occurs now.
One easy thing to do is to turn on compiler warnings and hints, rebuild the project, and then fix all warnings/hints.
Cheers
If it is Delphi business code, with data-aware components etc., the following might not apply.
I, however, write machine vision code, which is a bit computational. Most of the unit tests are console based. I'm also involved with FPC, and over the years have tested a lot with FPC, partially out of hobby, partially in desperate situations where I wanted any hunch.
Some standard tricks that I tried (in decreasing order of usefulness):
use -gv and run the code under Valgrind (practically, this means applications are required to run on Linux/FreeBSD, but for computational code and unit tests that can be doable)
compile using the FPC parameter -gt (= trash local vars: randomize local variables on procedure entry)
modify the heap manager to randomize the data of blocks it hands out (also applicable to Delphi code)
Try FPC's range/overflow checking and compiler hints.
run on a Mac Mini (powerpc) or win64. Due to totally different rules and memory layouts it can catch pretty funky things.
Items 2 and 3 together allow you to find most, if not all, initialization problems.
Try to find any clues, then go back to Delphi and search in a more focused way, debug, etc.
I do realize this is not easy. I have a lot of FPC experience and didn't have to figure everything out from scratch for these cases. Still, it might be worth a try, and might be a motivation to start setting up non-visual systems and unit tests to be FPC compatible and platform independent. Most of this work will be needed anyway, seeing the Delphi roadmap.
With such problems I always advise using log files.
Question: can you somehow detect the incorrect display in the source code?
If not, my answer won't help you.
If yes, check for the incorrectness, and as soon as you find it, dump the stack to a log file (see post-mortem debugging for details about dumping and re-symbolizing the stack).
If you see that some data has been corrupted, but you don't know how and when this happened, extract a function that performs such a validity check (with logging if it fails), and call this function from more and more places over the program's execution (e.g. after each menu call). If you iterate such an approach a few times, you have a good chance of finding the problem.
Is this a local variable inside a procedure or function?
If so, then it lives on the stack, and will contain garbage. Depending on the execution path and compiler settings the garbage will change, potentially pushing your logic 'over the edge'.
--jeroen
Given your description of the problem I think you had uninitialized data that you got away with without the optimizer but which blew up with the optimization on.

What should I ask the previous development team during my only (1-3hr) meeting?

There is a Ruby on Rails (1.8, 2.3.2) project. The first version of the project was made by some organisation. I will implement the next versions of this project without any help from that organisation. I will be able to talk with developers from the previous development team during a meeting (1-3 hours).
Project statistics: ~10k LOC, 1.0/0.6 code to test ratio, rspec
What questions about the project can you recommend asking?
First review the entire project and figure out as much as possible, so you have context and can actually understand what they tell you.
Ask
If you can record the conversation
For an architectural overview
Why they made certain architectural decisions over another
A complete list of dependencies (if you can't figure that out on your own)
What the biggest problems are
Which parts of the project are always / never being fixed
What the Achilles' Heel of the project is
What will cause the biggest headaches
What security issues there are, and what the constraints on fixing them are
What would you do next if you were me?
What you should know that you didn't ask (most important question)
Also, don't be judgemental; you want them to reveal any problems they know about. There are probably tons of things wrong with the app that they are embarrassed about, which you need to know sooner rather than later. They're not going to open up to you if they don't trust you.
I would ask for a code walkthrough. Not line-by-line, but more for the overall structure of the project, relationships between individual modules, etc.
Find out the Why's. How is easy enough to see in the codebase, but the why is sometimes impossible to figure out, and will bite you in the ass.
For instance...
Which parts of the application were the biggest performance issues? Which of those issues were resolved? Which are still issues?
Why did you opt for pattern / tool / library x? What other things did you consider? Why?
This will hopefully (knock on wood) help keep you from having to trudge through the same learning curve and mistakes that the first team had to deal with, and should give you good insight into where the first team actually made a poor choice, as opposed to a choice based on factors you have not accounted for yet.
Ask if the new features will cause any major changes to the existing code (architecturally) and what the implication of that will be with other dependent parts of the application.
Also get their emails, as you will have more questions.
One of the most important things, in my opinion, is to get as much technical documentation as you can prior to meeting with them. You should try to go into the meeting as informed as possible, so that you not only know what areas you need to focus on the most, but also to have a preexisting knowledge of how some of the subsystems relate to each other.
Also, do not be afraid to ask what they would have done differently, if given the chance. Some of the best ideas come too late in the development process to be implemented - be it from library availability, change in requirements, change in team, etc.
Bring cookies (or pizza, beer, or wine as appropriate); you will want them to have positive memories of you for when you call with questions.
Edit: to put my answer in the form of a question: "May I offer you a home-baked cookie?"
Perhaps you have done this already, but I would make sure you can:
Check out the latest version
Run all the migrations
Run all the tests
Deploy (even if to a staging server)
Run the application locally
Do all of this before you go to the meeting, so that whatever you can't do yet, you can make sure you can by the time it is over.
Other things that might be useful
data model
UI wireframes
bug tracker data / issue tracker data
who are the customers / people representing customers
development environment configuration
source control locations, etc.
explanation of special configuration settings
Wow! All great answers, right down to the cookies.
My contribution assumes that this is your one and only chance to access the old dev team, therefore you need to kick it up a notch:
Agenda. Split the meeting into several parts, for instance:
A quick (15 min) introduction and arch overview
One on one with team members.
Design review as a group, etc.
Positive energy. Especially if the relationship is inherently difficult, keep a positive focus by asking: what improvements would you put into the next version (a rewrite is not an option, right, Joel)? Capture every nuance, and drill down past their comfort level only nearer the end.
Facilitator. Use a trained design meeting facilitator. They can help prep for the meeting, conduct pre-meeting interviews, design the agenda. During the meeting they can drive the intensity, and keep the focus. They can also suggest forms of capturing what can be a fair amount of information.
Also, I would try to identify all design artifacts beyond the code, if any, and come to an understanding of how accurate they are. This may include doing design reviews of key elements of these documents vis-à-vis the as-built system.
Don

How to get a positive mental attitude towards testing?

I want to write tests for my app, though each time I look at rspec.info, I really don't see a definite path to take towards "doing things right" and testing first. I watched the peepcode videos on rspec more than once, yet it doesn't take. I want to take more pride in my work, and I think that testing will help. How can I break through this mental block?
Find tools that will reward you for testing. For example, make it very easy to run all the tests and get a message like
73 tests passed.
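One way to make the whole suite a single command away; a small sketch assuming RSpec and a Rakefile:
# Rakefile
require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:spec)

task default: :spec   # a plain "rake" now runs the whole suite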
Try random testing because you can test against a lot of values quickly and easily.
See if your language provides a test-coverage analysis tool that gives you percentage of statement coverage or percentage of block coverage. It is very rewarding to drive code coverage from 60% up to 90%---and if you are lucky, you will find bugs.
My key advice is to quantify your progress in testing so that you can see the numbers going up. That will make it a lot more motivating. (Gee, I wonder what other numbers that go up can be found on this site...)
I was hating it until I started creating a few testing macros. Like logging in or getting to the homepage. I found it fun to start poking at what my testing framework could really do.
It also helped to have someone else get me started by writing a few. Right away I found obvious improvements which made me want to get in there and start improving things.
"Test things you don't want to break."
It might be helpful to prioritise at first. I know that typing out the full three layers of model, view, and controller specs on top of the cucumber acceptance tests can be a chore. So one idea is to just test the most critical things in your app, and add tests as you run into bugs you don't want to see again.
"Always start with a failing test."
Cucumber's plain-text "stories" (features) are pretty awesome for getting some really concrete tests up and running. Maybe that would be one place where you could get started. Cucumber doesn't really work with an AJAX-based app, though; for that you'd have to use Selenium or Watir instead. You can start with a failing story before writing a single line of code, and quickly proceed from there to make that story pass.
"Don't test, specify."
Instead of thinking of tests, try to make a mental switch: you're not testing but SPECIFYING how your application will behave. This is design work, not nearly as boring as testing. :)
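For example, an RSpec file reads like a specification of behaviour rather than a list of checks. The Cart class below is invented only so the spec runs standalone:
require 'rspec'

class Cart   # deliberately tiny, only here to make the spec runnable
  def initialize; @items = []; end
  attr_reader :items
  def add(item); @items << item; end
  def total; @items.sum { |i| i[:price] }; end
end

RSpec.describe Cart do
  it 'starts out empty' do
    expect(Cart.new.items).to be_empty
  end

  it 'totals the prices of its items' do
    cart = Cart.new
    cart.add(price: 300)
    cart.add(price: 200)
    expect(cart.total).to eq(500)
  end
end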
Think of it like this: if you don't test, your code is broken.
You need to see the value that testing will bring in refactoring and extending your code. Once you have a set of tests that define the behavior of your classes, you can then feel free to start making changes to improve the code. Your tests will provide the confidence that what you're doing isn't breaking the system. When you go to add new functionality to your code, running your existing tests will give you confidence that the new code you've added doesn't break anything else.
The key is to take that long term view. Testing is an investment. It takes a little bit away from the code you could be writing but eventually it will start paying off with interest. The capital that you have stored up will make it much easier to move ahead more quickly when adding new features.
Assuming you already have a list of bugs to fix, I always like to go back through and where ever possible create an automated test that demonstrates the bug. Then fix the bug and watch the test pass. Since you have to test the bug anyway, and the bug should already give you enough information to recreate it, you can see an immediate return on your tests.
Eventually, you'll start to get a feel for putting the tests together and how to write them, and you won't need the "blueprint" of an existing bug.
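A tiny sketch of such a bug-first spec; the formatter and the trailing-zero bug are invented for illustration, but the shape is the same for a real bug report:
require 'rspec'

class PriceFormatter
  def self.format(amount)
    sprintf('$%.2f', amount)   # the fix; the buggy version dropped trailing zeros
  end
end

RSpec.describe PriceFormatter do
  it 'keeps the trailing zero from the bug report' do
    expect(PriceFormatter.format(12.5)).to eq('$12.50')
  end
end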
I wrote a motivational post about just this case a couple of days ago. Here is the summary:
Start writing tests whenever you have an opportunity to do it (i.e. whenever you write some code). Choose any tool that makes sense to you and write any test that you feel could cover at least some tiny behavior of your application (don't worry about coverage or any other scary terms from day one). Don't be afraid of primitive tests and trivial assertions; you'll get more confidence as your test coverage grows, and you'll become happier and happier as you notice that you don't need to hit F5 that often anymore. Think about testing in other positive terms: the better you are at it, the less time you need to spend on activities you don't like (watching the spinning refresh icon in the browser, debugging) and more on things you love.
And here is the whole thing, if you are interested.
As has been mentioned previously, the easiest way to break into testing is with regression testing.
I'd also avoid doing controller specs - they are a PITA. Do heavy model testing, because that's where the logic should be in the first place.
Try spec'ing / testing a plain Ruby project before you go off into a Rails project.
Well, I'll tell you how!
First, do the following ten times manually, on different applications, before you try to automate:
Work through the negative scenarios, where the result should come out negative.
It could be wrong data entered that still gives you correct outputs.
For example, a login screen: there could be many scenarios, such as correct user with wrong password, or wrong user with correct password... The most important thing is that you do not give up until you break it. This is your mantra.
Now you are thinking like a tester. Turn to your system:
write the negative tests and their results,
and then the positive tests.
Design it.
Now develop the framework.

What is the difference between a bug and a change request in MSF for CMMI?

I'm currently evaluating the MSF for CMMI process template under TFS for use on my development team, and I'm having trouble understanding the need for separate bug and change request work item types.
I understand that it is beneficial to be able to differentiate between bugs (errors) and change requests (changing requirements) when generating reports.
In our current system, however, we only have a single type of change request and just use a field to indicate whether it is a bug, requirement change, etc (this field can be used to build report queries).
What are the benefits of having a separate workflow for bugs?
I'm also confused by the fact that developers can submit work against a bug or a change request, I thought the intended workflow was for bugs to generate change requests which are what the developer references when making changes.
#Luke
I don't disagree with you, but this difference is typically the explanation given for why there are two different processes available for handling the two types of issues.
I'd say that if the color of the home page was originally designed to be red, and for some reason it is blue, that's easily a quick fix and doesn't need to involve many people or man-hours to do the change. Just check out the file, change the color, check it back in and update the bug.
However, if the color of the home page was designed to be red, and is red, but someone thinks it needs to be blue, that is, to me anyway, a different type of change. For instance, has someone thought about the impact this might have on other parts of the page, like images and logos overlaying the blue background? Could there be borders of things that look bad? Link underlining is blue; will that show up?
As an example, I am red/green color blind; changing the color of something is, for me, not something I take lightly. There are enough web pages out there that give me problems. This is just to make the point that even the most trivial change can be nontrivial if you consider everything.
The actual end implementation change is probably much of the same, but to me a change request is a different beast, precisely because it needs to be thought about more to make sure it will work as expected.
A bug, however, is that someone said this is how we're going to do it and then someone did it differently.
A change request is more like but we need to consider this other thing as well... hmm....
There are exceptions of course, but let me take your examples apart.
If the server was designed to handle more than 300,000,000,000 page views, then yes, it is a bug that it doesn't. But designing a server to handle that many page views is more than just saying "our server should handle 300,000,000,000 page views"; it should contain a very detailed specification for how it can do that, right down to processing-time guarantees and average disk-access times. If the code is then implemented exactly as designed, and is unable to perform as expected, then the question becomes: did we design it incorrectly, or did we implement it incorrectly?
I agree that in this case, whether it is to be considered a design flaw or an implementation flaw depends on the actual reason why it fails to live up to expectations. For instance, if someone assumed disks were 100 times as fast as they actually are, and this is deemed to be the reason why the server fails to perform as expected, I'd say this is a design bug, and someone needs to redesign. If the original requirement of that many page views still has to be met, a major redesign with more in-memory data and the like might have to be undertaken.
However, if someone has just failed to take into account how RAID disks operate and how to correctly benefit from striped media, that's a bug and might not need that big of a change to fix.
Again, there will of course be exceptions.
In any case, the original difference I stated is the one I have found to be true in most cases.
Keep in mind that part of a Work Item Type definition for TFS is the definition of its "Workflow", meaning the states the work item can be in and the transitions between those states. This can be secured by security role.
So, generally speaking, a "Change Request" would be initiated and approved by someone relatively high up in an organization (someone with "sponsorship" rights related to spending the resources to make a, possibly very large, change to the system). Ultimately this person would be the one to approve that the change was made successfully.
For a "Bug" however, ANY user of the application should be able to initiate a Bug.
At an organization where I implemented TFS, only department heads could be the originators of a "Change Request", but "Bugs" were created from help desk tickets (not automated, just through process...).
Generally, though I can't speak for CMM, change requests and bugs are handled and considered differently because they typically refer to different pieces of your application lifecycle.
A bug is a defect in your program implementation. For instance, if you design your program to be able to add two numbers and give the user the sum, a defect would be that it does not handle negative numbers correctly, and thus a bug.
A change request is when you have a design defect. For instance, you might have specifically said that your program should not handle negative numbers. A change request is then filed in order to redesign and thus reimplement that part. The design defect might not be intentional, but could easily be because you just didn't consider that part when you originally designed your program, or new cases that didn't exist at the time when the original design was created have been invented or discovered since.
In other words, a program might operate exactly as designed, but need to be changed. This is a change request.
Typically, fixing a bug is considered a much cheaper action than executing a change request, as the bug was never intended to be part of your program. The design, however, was.
And thus a different workflow might be necessary to handle the two different scenarios. For instance, you might have a different way of confirming and filing bugs than you have for change requests, which might require more work to lay out the consequences of the change.
A bug is something that is broken in a requirement which has already been approved for implementation.
A change request needs to go through a cycle in which the impact and effort has to be estimated for that change, and then it has to be approved for implementation before work on it can begin.
The two are fundamentally different under CMM.
Is my assumption incorrect then that change requests should be generated from bugs? I'm confused because I don't think all bugs should be automatically approved for implementation -- they may be trivial and at least in our case will go through the same review process as a change request before being assigned to a developer.
Implementation always comes from a requirement. It may come from a product manager; it may come from one of your random thoughts. It may be documented; it may come from a conversation. At the end of the day, even for something as simple as a := a + 1, the "real" implementation depends on the compiler, linker, CPU, etc., which in turn depend on the physical laws of real life.
A bug is something where the implementation goes against the ORIGINAL requirement. Anything other than that is a change request.
If the requirement is changed and the implementation needs to be changed as well, it's a change request.
If a dependency has been changed, for example a web browser stopped supporting some tags and you need to make some change, it's a change request.
In the real world, anything that is not properly documented should be treated as a change request. The product manager forgot to put something in the story? Sorry, that's a change request.
All change requests should be properly estimated and pointed. Developers get paid for implementing change requests, not for making bugs and then fixing them.
