I've been familiar with some mnemonic/memorization techniques for about a year.
I think these techniques can give a developer a significant benefit, maybe even help you become an expert in the field.
If you are familiar with these techniques, you know that some of them are designed for long-term memorization. We read lots of books, and many concepts don't stick because they rarely come up in daily coding life, so you have to relearn them again and again, months and years later.
The same goes for frameworks. It takes time to become familiar with a framework's syntax, useful code constructs, and so on, but after a while you forget many concepts from your previous framework (or from a framework you use rarely but that is still important to you).
By using these techniques you can build a sustainable knowledge base over time, one that reliably grows; you can be confident that you won't forget the concepts you learned earlier.
What do you think about this idea?
If you are already familiar with mnemonic techniques, please share your experience; it would be very useful and interesting to hear.
Useful links:
Method of loci
Mnemonic
My favorite method:
Type it into Google
I'm being totally serious - why do you need to remember it?
You don't memorize how to be a good programmer any more than you memorize how to be a good classical violinist. You practice, practice, practice. That will let you naturally recall the most important constructs, and as Chad says, Google is there for the less important ones. I have never felt the need to use mnemonic devices or rote memorization to learn a programming construct or technique.
"Expertise in the field" isn't about memorizing function calls. It's about the ability to break problems down, and provide performant, maintainable, reliable solutions in minimal time.
You could memorize every function call in the STL, and still be a complete neophyte programmer.
I read Harry Lorayne's "The Memory Book" a few years ago, and found that the techniques therein were great for remembering related facts. However, in my experience the techniques fell short of what they could have been, namely:
The memorization didn't tend to last. If I wasn't actively practicing a particular list or body of facts, I would eventually forget it completely within a few days or weeks.
I had trouble applying the techniques to hierarchical data sets, like class libraries. This made them less powerful for programming material.
The techniques were very useful for things that could be easily explained by voice, or by a single stream of text. However, I had trouble applying them to things of a more visual nature, such as mathematical equations.
That said, I have used mnemonic techniques while coding for things that Google could not replace. I sometimes use the number-memorization trick to recall a specific line of code (by its line number) while I jump around a code file, or to remember function names as I jump between files.
I agree with the other answers; some of the more useful things you could focus on improving are:
Troubleshooting a problem using the 'elimination' technique: eliminating problem areas one by one until you hit the right one.
Quickly getting to the resource/API/information you need: use Google, SO, CodePlex, Google Code, Koders.com code search, Google Code Search, MSDN, etc. Knowing which information lies where is enough to save time drastically.
Avoiding thrashing (being stuck on a problem for too long with no results): once you've spent enough time on a problem, give others 'complete' and 'relevant' information about it so you can help them help you.
Finally, memorizing programming theory is not helpful; however, reading, listening to experts and podcasts, and attending conferences can help a great deal with 'access to information from memory'.
HTH
Do you think having a great memory is REQUIRED to be a great programmer?
I don't consider myself a great programmer but I do think I am decent. But my memory is REALLY bad so I find myself always having to remind myself how to do things. I mean I "know where to look" but sometimes it makes me feel like I am just a crappy programmer. What makes it even worse is that I am always forgetting where things are in my source code or what algorithm I used for certain situations.
Think back on the great programmers you have encountered in your life, didn't all of them seem to have amazing memories?
Surely apocryphal, but here's the story of Einstein's phone number:
A reporter interviewed Albert Einstein. At the end of the interview, the reporter asked if he could have Einstein's phone number so he could call if he had further questions.
"Certainly," replied Einstein. He picked up the phone directory and looked up his phone number, then wrote it on a slip of paper and handed it to the reporter.
Dumbfounded, the reporter said, "You are considered to be the smartest man in the world and you can't remember your own phone number?"
Einstein replied, "Why should I memorize something when I know where to find it?"
I have this coworker who writes really bad code that is incredibly hard to maintain. I've come to the conclusion that his problem is a good memory. He's simply able to remember where he put what functionality, so he doesn't have to write code that is self-explanatory. He simply remembers that crap. The rest of us have a really hard time figuring out his code.
I'm sure that a good memory isn't that guy's only problem. But I'm sure his code would improve if his memory got worse.
Treat your short-term memory like a stack (not static storage) and don't expect much more from it. I've come back to code that I wrote only a month ago and it's almost like someone else wrote it... it just takes a while to get back into the same zone.
I often get teased for leaving comments for myself like breadcrumbs... but it works. If I finish some function and say, "AHA, that is absolutely BRILLIANT!", I immediately comment the complexity, as I'm sure to forget it.
So now, to answer a question with two questions:
What did you have for lunch last Wednesday?
What is the purpose of 'counter' in hash_foo()?
At least, with #2, you can quickly go back and look / remember.
As long as you can remember how g-o-o-g-l-e is spelt, you're fine. :)
But seriously, you do need to keep several things in your short-term memory at once. Longer-term memory is, I think, less important. As long as you're aware that something exists, that you've seen something before, etc., then when it becomes relevant you'll know you can dig it up.
Experienced programmers can generally regurgitate APIs, minor details and so forth but in my experience this has never been a case of sitting down and memorizing things by rote. It's a natural consequence of using things again and again.
I would say the opposite: having a good memory may lead to writing code which only the author can understand, because she remembers the details of its logic. I, on the other hand, having a bad memory, document my code and write it as clearly as possible.
Honestly, I've found poor memory to be an asset, even poor short-term memory. Poor short-term memory really forces you to break out the separation of concerns. The end result is very clean, very simple, very well encapsulated code. I actually have pretty good short-term memory, but I've learned to try really hard not to rely on it after a few experiences writing code while I was distracted enough that I couldn't retain much at once. I was shocked to observe that the code was far cleaner than code I'd developed in the past.
Poor long-term memory is an asset, because you end up training yourself very well in how to find and learn techniques, APIs, algorithms, etc. It also tends to encourage you to find a small set of common themes to guide you in your work.
All in all, the mark of a good programmer isn't complexity (which is really difficult to achieve without good memory), but simplicity (which by nature doesn't require much memory).
I can answer this with exactly one word: NO. Having a great memory that stores everything about programming is not a must. Experience and the hard work of learning by practice are what count.
I have experienced this too. If you have enough hours (or years) of experience creating software with best practices applied, then you are a true master of your own job and of the programming languages you use. Please don't be sad if you have a bad memory; striving to always learn and practice can defeat that weakness.
To me, there are two kinds of programmers in the world. The first were born to do it, the second learnt. In both groups they range from unbelievably poor to unbelievably great. Does memory determine those ratings? No, absolutely not. While a good memory can help you with learning, nothing helps you more than practice and understanding. After all, being able to remember the entire Encyclopaedia Britannica means nothing without understanding. My server's storage is a classic example there.
Programming is about logic, both in the code and in how you approach the problem. If you want clear, easy-to-understand code then chances are you'll break the problem into small manageable chunks (i.e. ones that fit in your head in their entirety) and work on each one. Each function then condenses down into a single command for your next stage of complexity. At the end of that next stage, if there is another, you'll have a set of single commands again to build on. Logical naming, logical partitioning, logical assembly... I think I'm getting my logical point across ;)
My memory is appalling, and I mean appalling. I can be introduced to 3 people and by the time #3's name is said I've forgotten who #1 is. I can still write some good code, not every time or every day; when you're in the zone it's something else - at that point it is art. So, put your memory to one side, get yourself either a really quiet space or a pink noise generator, and dive in. The only thing that's going to make you a better programmer is practice, practice, practice. The only thing to remember is that programming is a skill, and skills are practiced - best practiced among friends who can give constructive criticism and advice... like Stack Overflow :)
Apologies for the tome level of this answer but I couldn't remember what I'd already written ;)
Having a good memory is quite useful but certainly not required. I would say that it's not that great programmers have a great memory; rather, they have spent a lot of time investigating even the littlest issues, which improves their understanding and their recall. If you spend 4 minutes resolving a problem (Googling or asking on SO) then you probably won't remember the resolution when you hit it again 4 months down the line. It could be an evolutionary trait or just a bad memory =)
Good programmers also have well thought out principles which allows them to work on auto-pilot without second-guessing themselves. A good set of principles also achieves consistency and predictability (which is a quality of memory) through reinforcement.
This also extends into other domains. Chess grandmasters can recall an entire game played 40 years ago. That's because they remember patterns (openings, variations, root cause and effect of the moves that led to the end game, etc.), which helps group individual moves into units.
In software, tools like auto-complete, a KB/wiki, searchable check-in history, etc. can help.
No. But maybe it can make you great...
An art of programming (maybe the art) is being able to approach problems in such a way that you can grasp the whole of them, despite your limitations (such as imperfect memory). This is because everyone - including the smartest of us - has limitations. Bumping into your limitations is not a sign that you have limitations, but a sign that you are reaching further.
This art (insofar as I know it) includes things like divide and conquer (using modules of various kinds, to match the shape of the problem); using standard techniques to telegraph your intention (idioms, OO Design Patterns are just one); separating out the core of the problem (this one is not about the code: it's about the problem); and of course comments.
I used to believe that good code was self-documenting (and even that code is truth), but recently I've been writing parsers, and including the CFG in a comment is a very helpful reference, because it is a much simpler representation of the code's intention (a small sketch follows below).
A coder's gotta know their limitations. It's unrealistic to expect to have the same grasp of something months later, as when you were in the thick of it. All the above involve accepting that problem, and working on a solution. Not only does it make your code easier for you to grasp later on, it makes it easier for someone else to grasp... but most importantly I firmly believe that clearer and simpler code is fundamentally, and transcendentally, better code.
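To make the parser point above concrete, here is a minimal sketch of the idea (my own toy example, not the answer author's code, and the grammar and function names are purely illustrative): a tiny recursive-descent evaluator in Python whose CFG lives in a comment right beside the functions that implement it, one function per rule.

    import re

    # The grammar, kept as a comment next to the code it describes.
    # Each rule corresponds to one parse function below:
    #
    #   expr   -> term   (('+' | '-') term)*
    #   term   -> factor (('*' | '/') factor)*
    #   factor -> NUMBER

    def tokenize(text):
        # Split into number tokens and single-character operators.
        return re.findall(r"\d+|[-+*/]", text)

    def parse_expr(tokens):
        value = parse_term(tokens)
        while tokens and tokens[0] in "+-":
            op = tokens.pop(0)
            rhs = parse_term(tokens)
            value = value + rhs if op == "+" else value - rhs
        return value

    def parse_term(tokens):
        value = parse_factor(tokens)
        while tokens and tokens[0] in "*/":
            op = tokens.pop(0)
            rhs = parse_factor(tokens)
            value = value * rhs if op == "*" else value / rhs
        return value

    def parse_factor(tokens):
        return int(tokens.pop(0))

    print(parse_expr(tokenize("2+3*4")))  # prints 14

When the code drifts, the grammar comment is the quick way to see what the parser was supposed to accept, which is exactly the "simpler representation of intention" being described.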
The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. - Dijkstra
In allusion to Edsger Dijkstra, a competent programmer is fully aware of the limited size of his own skull. The more details you don't cram in your head, the better you can tackle the problem at hand.
Modularize your code heavily, refactor, package your nifty algorithms into objects, and use those objects; that way, you don't always have to "micro-remember" each and every implementation detail of your programs.
No. The ability to forget about what you know and continue learning is at least equally important in the long term.
Good notes and bookmarks and web searches go a long way.
Remembering the really simple things is required for great programming. Things as simple as "keep at it".
An interesting perspective from the other side of your monitor: Locality of Reference
I think one benefit to having a good memory (modesty point: I have a good memory) is the ability to be able to think on your feet when not actually in front of a computer.
For example, you might be in a meeting when some new kind of functionality for your app is suggested. Can it be done? How long is it likely to take? These are questions which are easier to answer if you can pretty much walk through 250k loc in your head.
That said, I find a grain of truth in others opinions that my own code might be less clear because I can remember it better.
Some of the best-written code I've ever seen was written in such a way that each design decision was inevitable, and the code read as its own explanation. That's far better IMHO than code which requires the reader (or, worse, the maintainer) to keep tons of arbitrary detail memorized.
My own index of complexity in code is "How much stuff do I have to keep in mind to understand this one line?"
More is worse.
I think having good memory is helpful for learning new things quickly.
This doesn't mean it is a requirement for being a great programmer.
In fact it is more about intelligence than memory capability, but the brain is too complex a subject for us to identify particular qualities, compare them with programming skills, and retrieve any relevant conclusions.
That is the mystery of the brain.
Occam's Razor suggests that a simpler theory is likelier to be true.
If code is a theory, describing inputs going to outputs, then shorter code, using expected idioms and libraries, is more likely to be "true" - that is, it is more likely to capture the essence of the solution, so it will generalize to inputs that you didn't expect.
Shorter, unsurprising code is easier to remember.
It all depends on what your good memory remembers...
I've worked on the same project for 10 years or so and I can't remember every line and who wrote it and why.
But... I can remember pretty much all the user requests and user issues. Who wanted what and when.
I can remember pretty much all the support issues we have had.
Finding old code is easy - we have great tools for that. Finding old issues is a much more abstract process: we have JIRA and Wikis but sometimes they don't cut it because they fail to provide the semantic meaning.
So. Pay attention to what REALLY matters and remember that.
Programmers with poor memory are like Universal Turing machines compared to practical computers: technically you can accomplish the same things by referring to information you or someone else has recorded somewhere... it's just that it may take a little longer....
I think it's possible to remember different types of things with differing degrees of aptitude.
For example, I sometimes find I have a pretty bad memory when it comes to random facts and figures, as well as things that I've done or will be doing - the latter meaning I find bug-tracking software an invaluable tool.
On the other hand, I can remember the structure of complex pieces of software I've written, and where to find specific things within that.
This may be about logical association. Well-designed software should (in theory) have a logical structure, which may make it easier to store in your memory if your brain is wired up that way.
Random pieces of information, however, may not have these associations, making them harder to remember.
I would say it's necessary for being great and fast. My memory for programming details is OK (and I have Google for those). However, when I sit in front of applications that I've primarily written (~30-40k lines of code) I'm able to load their structure almost completely into my memory. I can find the way I'm doing something in a couple of seconds and recall why I implemented it the way I did. That's invaluable. By 11 am I've been able to do more work than some others do all day. Now, that doesn't make me a great programmer, but it does make me an enormously productive programmer. This gives me time to refactor, write extra code, surf SO, grab an hour lunch, etc.
Just a simple comment: "Repetition is the mother of learning." It doesn't matter whether you have a good or a bad memory; the functions you use most in your programs are the ones you will remember best. Also, in my case, I have the internet: when I don't remember something I just ask, even if it is a dumb or easy question, and a lot of times I remember the answer right after I post the question, so I quickly post the answer myself. The real question is how hard you put your mind to work...
:)
I think it depends. Memory for a programmer is very, very important - both short- and long-term. However, what you use that memory for is the important thing. As a programmer, if you're using it to memorize every nuance of an API, then I'd say you're wasting your memory.
Ultimately, I try to use my memory to remember the important things and anything that I can't easily find at a later point in time. I'll usually put API stuff in short term memory and use google and intellisense to help me with the specifics. Design patterns, methodologies, lessons learned from experience, on the other hand are usually what I try to put into long term memory so I can use it effectively in the future without having to relearn everything.
In short, yes a programmer needs a good memory...both long and short term. But they need to be wise in how to use that memory...and that, I think, makes the difference in a great programmer.
Having a strong enough memory to hold the things you need to use today is the important part. If you are constantly searching for answers to the same question, you probably have a weak memory.
The most important thing is remembering where you can find the answers. I will sometimes blog on topics that are a bit more complex so I have a place to find them when I need them. But I don't try to hold onto them perpetually, because I can search my own blog and find them later. I do the same with other people's blogs and I know which blogs to hit for certain types of answers.
When all else fails, Google it!
I used to think having a good memory was a time saver because the more you remembered, the less time you spent looking things up, but tools and IDEs have gotten so good now that many things I used to memorize, like syntax and various code snippets, are quickly available in a few keystrokes. That, and the fact that the amount of information in the field grows far faster than any mortal programmer can keep up with, makes me think memory matters less and less. What matters more is having good access to, and organization of, useful information.
I am doing some research into common errors and poor assumptions made by junior (and perhaps senior) software engineers.
What was your longest-held assumption that was eventually corrected?
For example, I did not realize that the size of an integer is not standardized and instead depends on the language and target. A bit embarrassing to state, but there it is.
Be frank; what firm belief did you have, and roughly how long did you maintain the assumption? It can be about an algorithm, a language, a programming concept, testing, or anything else about programming, programming languages, or computer science.
For a long time I assumed that everyone else had this super-mastery of all programming concepts (design patterns, the latest new language, computational complexity, lambda expressions, you name it).
Reading blogs, Stack Overflow and programming books always seemed to make me feel that I was behind the curve on the things that all programmers must just know intuitively.
I've realized over time that I'm effectively comparing my knowledge to the collective knowledge of many people, not a single individual and that is a pretty high bar for anyone. Most programmers in the real world have a cache of knowledge that is required to do their jobs and have more than a few areas that they are either weak or completely ignorant of.
That people knew what they wanted.
For the longest time I thought I would talk with people, they would describe a problem or workflow and I would put it into code and automate it. Turns out every time that happens, what they thought they wanted wasn't actually what they wanted.
Edit: I agree with most of the comments. This is not a technical answer and may not be what the questioner was looking for. It doesn't apply only to programming. I'm sure it's not my longest-held assumption either, but it was the most striking thing I've learned in the 10 short years I've been doing this. I'm sure it was pure naivete on my part but the way my brain is/was wired and the teaching and experiences I had prior to entering the business world led me to believe that I would be doing what I answered; that I would be able to use code and computers to fix people's problems.
I guess this answer is similar to Robin's about non-programmers understanding/caring about what I'm talking about. It's about learning the business as an agile, iterative, interactive process. It's about learning the difference between being a programming code monkey and being a software developer. It's about realizing that there is a difference between the two, and that to be really good in the field, it takes more than just syntax and typing speed.
Edit: This answer is now community-wiki to appease people upset at this answer giving me rep.
That I know where the performance problem is without profiling
That I should have only one exit point from a function/method.
That nonprogrammers understand what I'm talking about.
That bugfree software was possible.
That private member variables were private to the instance and not the class.
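The correction, for anyone who held the same assumption: in Java, C++, C#, and similar languages, access control is per class, not per instance, so code in one object can freely touch another object's private members. Here is a minimal sketch in Python, whose double-underscore name mangling is likewise per class (the Account example is hypothetical, my own illustration):

    class Account:
        def __init__(self, balance):
            self.__balance = balance  # "private" via name mangling

        def richer_than(self, other):
            # Privacy is per class, not per instance: this Account can
            # read another Account's private balance directly.
            return self.__balance > other.__balance

    print(Account(100).richer_than(Account(50)))  # prints True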
I thought that static typing was sitting very still at your keyboard.
That you can fully understand a problem before you start developing.
Smart People are Always Smarter than Me.
I can really beat myself up when I make mistakes, and I often get told off for self-deprecation. I used to look up in awe at a lot of developers and often assumed that since they knew more than me about X, they knew more than me about everything.
As I have continued to gain experience and meet more people, I have started to realise that oftentimes, while they know more than me in a particular subject, they are not necessarily smarter than me/you.
Moral of the story: Never underestimate what you can bring to the table.
For the longest time I thought that Bad Programming was something that happened on the fringe.. that Doing Things Correctly was the norm. I'm not so naive these days.
I thought I should move towards abstracting as much as possible. This hit me in the head in a major way, leaving me with too many intertwined little bits of functionality.
Now I try keep things as simple and decoupled as possible. Refactoring to make something abstract is much easier than predicting how I need to abstract something.
Thus I moved from developing the framework that rules them all, to snippets of functionality that get the job done. Never looked back, except when I think about the time I naively thought I would be the one developing the next big thing.
That women find computer programmers sexy...
That the quality of software will lead to greater sales. Sometimes it does but not always.
That all languages are (mostly) created equal.
For a good long while I figured that the language of choice didn't really make much of a difference in the difficulty of the development process and the potential for project success. This is definitely not true.
Choosing the right language for the job is as important/critical as any other single project decision that is made.
That a large comment/code ratio is a good thing.
It took me a while to realize that code should be self documenting. Sure, a comment here and there is helpful if the code can't be made clearer or if there's an important reason why something is being done. But, in general, it's better to spend that comment time renaming variables. It's cleaner, clearer and the comments don't get "out of sync" with the code.
That programming is impossible.
Not kidding, I always thought that programming was some impossible thing to learn, and I always stayed away from it. And when I got near code, I could never understand it.
Then one day I just sat down and read some basic beginner tutorials, and worked my way from there. And today I work as a programmer and I love every minute of it.
To add to that: I don't think programming is easy. It's a challenge, I love learning more, and there is nothing more fun than solving a programming problem.
"On Error Resume Next" was some kind of error handling
That programming software requires a strong foundation in higher math.
For years before I started coding I was always told that to be a good programmer you had to be good at advanced algebra, geometry, calculus, trig, etc.
Ten years later and I have only once had to do anything that an eighth grader couldn't.
That optimizing == rewriting in assembly language.
When I first really understood assembly (coming from BASIC), it seemed that the only way to make code run faster was to rewrite it in assembly. It took quite a few years to realize that compilers can be very good at optimization, and that - especially on CPUs with branch prediction and the like - they can probably do a better job than a human can in a reasonable amount of time. Also, spending time optimizing the algorithm is likely to give you a better win than spending time converting from a high-level to a low-level language. And premature optimization is the root of all evil...
That the company executives care about the quality of the code.
That fewer lines is better.
I would say that storing the year element of a date as 2 digits was an assumption that afflicted an entire generation of developers. The money that was blown on Y2K was pretty horrific.
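A tiny illustration of how that assumption breaks (the values are my own, purely for illustration):

    # Years stored as two digits: "00" is ambiguous between 1900 and 2000.
    birth_year = 99     # stored as "99", meaning 1999
    current_year = 0    # stored as "00", meaning 2000
    print(current_year - birth_year)  # prints -99 instead of an age of 1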
That anything other than insertion/bubble sort was quite simply dark magic.
That XML would be a truly interoperable and human readable data format.
That C++ was somehow intrinsically better than all other languages.
This I received from a friend a couple of years ahead of me in college. I kept it with me for an embarrassingly long time (I'm blushing right now). It was only after working with it for 2 years or so before I could see the cracks for what they were.
No one - and nothing - is perfect, there is always room for improvement.
I believed that creating programs would be exactly like what was taught in class...you sit down with a group of people, go over a problem, come up with a solution, etc. etc. Instead, the real world is "Here is my problem, I need it solved, go" and ten minutes later you get another, leaving you no real time to plan out your solution efficiently.
I thought mainstream design patterns were awesome, when they were introduced in a CS class. I had programmed about 8 years as hobby before that, and I really didn't have solid understanding of how to create good abstractions.
Design patterns felt like magic; you could do really neat stuff. Later I discovered functional programming (via Mozart/Oz, OCaml, later Scala, Haskell, and Clojure), and then I understood that many of the patterns were just boilerplate, or additional complexity, because the language wasn't expressive enough.
Of course there are almost always some kind of patterns, but they are in a higher level in expressive languages. Now I've been doing some professional coding in Java, and I really feel the pain when I have to use a convention such as visitor or command pattern, instead of pattern matching and higher order functions.
For the first few years I was programming I didn't catch on that 1 Kbyte is technically 1024 bytes, not 1000. I was always a little perplexed by the fact that the sizes of my data files seemed slightly off from what I expected them to be.
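The discrepancy is easy to reproduce (a small sketch of my own): a 1,000,000-byte file is exactly 1000 decimal kilobytes, but only about 976.6 kilobytes if 1 KB = 1024 bytes.

    size_bytes = 1_000_000
    print(size_bytes / 1000)  # 1000.0   -> "kilobytes" in the decimal sense
    print(size_bytes / 1024)  # 976.5625 -> kilobytes in the 1 KB = 1024 B sense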
That condition checks like:
if (condition1 && condition2 && condition3)
are performed in an unspecified order...
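In fact the order is guaranteed: && in C, C++, and Java evaluates strictly left to right and stops at the first false operand. A small sketch of the same behavior in Python, whose "and" operator works the same way (my own example):

    words = []
    # 'and' evaluates left to right and short-circuits, just as && does
    # in C, C++, and Java: words[0] is never evaluated when the list is
    # empty, so no IndexError is raised.
    if len(words) > 0 and words[0] == "hello":
        print("greeting found")
    else:
        print("no greeting")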
That my programming would be faster and better if I performed it alone.
This is one of those meta-programming questions that may or may not belong on SO, but here goes...
Have any other programmers out there noticed that their ability to communicate with people (technical or otherwise) almost disappears during and after a period of intense programming?
I normally think of myself as a relatively good communicator. However, last night after staying late to work on some relatively challenging programming tasks, I found even ordering a takeaway meal was very difficult: my words got tied up before they left my mouth. This is not the first time this has happened ...
Has anyone else experienced this phenomenon? Is there a name for it?
Yes, it's called fatigue.
This happens to me, to some extent, basically every workday. My girlfriend knows that when I'm in "robot mode" I'll be much less responsive to her subtle body-language cues and take longer to make spoken responses.
Some of it is just intense concentration, and fatigue caused by it, I'm sure; but it also makes sense to me that wrapping one's brain around languages that are shaped around the needs and limitations of machines makes one less adept, at least temporarily, at those languages shaped around the needs and limitations of people.
Although fatigue is definitely a component, I've experienced this phenomenon after any task that requires intense concentration and does not involve communication with another person. It's intensified if the task is repetitious or taxes short-term memory, such as remembering intermediate results while following several paths of logic. Non-programming examples include solving math problems; comparing intricate, competing strategies; and organizing a year's worth of paper receipts by date, account, and category.
My guess is that these tasks encourage "internal" communication, which doesn't necessarily require you to express your thoughts as words, and certainly not in organized sentences. It's more efficient for your brain to take "shortcuts" that wouldn't be possible if you had to describe your thoughts to another person in a logical, orderly fashion. And as you become engrossed in the task you become focused exclusively on it, losing awareness of time, environmental and physical conditions, and the "chatter" that normally occurs in your head when you're aware of your "self." I imagine something similar happens to athletes when they hit their "stride," though I'm woefully at a loss to know from experience. :-)
For me this is a very comfortable state, as I enjoy focusing on a problem and navigating to the solution. If I'm forced back to "reality" without a few minutes of transition, it's like waking from a vivid dream, and I don't communicate at my best until the normal, social, thought processes resume.
This also happens, though to a far lesser degree, when my wife and I explain things to each other: we each tend to assume a lot of background and understanding on the other's part, and we therefore omit a lot of details and "incidentals" that we'd include if we were talking to anyone else. When we're "in tune" with each other it's easy, efficient, and creates tremendous synergy; when we assume too much understanding it can be terribly frustrating and leave each of us wondering how the other could possibly be so dense. :-)
I've noticed that extended periods of deep concentration on programming problems have sometimes caused me to struggle with both verbal and written communication. It becomes noticeable when I start struggling to recall words and phrases that normally come easily to me.
My theory: all of my short-term memory is tied up in nonverbal concepts; saying something requires me to perform a very expensive context switch (or 'paging' operation, if you will).
Staring and grunting is about all I can manage sometimes.
When my communication skills drop, I find that my programming skills are generally dropping in tandem, usually (as others have noted) due to fatigue.
But when I've been programming intensely I find that my general level of communication skills is honed -- I speak, listen and argue with more intensity, certainly about the general space I'm working in but even about other things. It's like thinking hard about one problem puts me in the mode of thinking hard about everything.
I've even found that the best way to write technical documents -- which I generally dislike doing -- is by doing some interesting coding, even if it's prototyping or experimental or otherwise throw-away, to put me in the right mode and just get my brain working.
I'd think there are a few questions to ask here:
1) Did you order verbally, online, or through handwritten notes? If it was verbally, then it may be that your mind had trouble switching gears - understandable if you really got into a zone where your reflexes were optimized for typing this and that rather than for explaining how to order a pizza, for example.
2) Did you really take a break before getting the meal or was it part of a quick, "Ok I'm going to go and get this, this and this done now and then I'll be back to finish this off," mentality? I've done the latter many times and usually it is just a sign that my mind is focused on that programming task rather than the other things around me.
3) How alert were you when you did order? Fatigue is certainly another possible factor, combined with being up at an irregular hour.
4) How long did you spend programming before going out? If it was more than a few hours, e.g. 3, then I could see it if you tend to optimize what you are doing at any moment, e.g. when you are programming, do you try to optimize where the mouse, keyboard and monitor all are?
Those would be a few areas I'd look into. Maybe you just have an intense adaptability that you are only now learning you have. :)
Language skills are usually situated in the left hemisphere of the brain.
The feeling I get when I'm "in the zone" is similar to the right-brained feelings I get when I draw.
I conclude that programming is more of a right-brained activity for me.
Betty Edwards's "Drawing On The Right Side Of The Brain" is a terrific book about the brain and drawing. That's where I learned how to make that switch.
SYN
leads to...
ACK
Or, maybe...
NACK
That!
Is the question!
The more I code, the more f-bombs I say to the computer.
The importance of excellent communication skills today is immense; all of the most successful people on this earth have been well versed in the powerful skill of communication. It is an art which can be acquired if one is willing to spend a few hours on oneself and recognize one's hidden skills.
I'm preparing to teach someone to program. When I learned the course material, I used turtle graphics for the first few exercises. In reading introductory textbooks, I have not found one that uses the technique. Did others find this approach helpful? If not, what is a better way to learn to program?
I think it depends on age of the target group.
If they are children (I would say up to 12-14 years), doing any easy graphics is a good way to motivate them; on the other hand, don't expect them to learn much about real programming or algorithms.
If they are teens (14-18), it's perhaps still good to use some algorithms that give pretty results (for example 3D or fractals), but since they are older and capable of more abstract thinking, I don't think 2D turtle graphics is interesting enough.
If they are older, doing any graphics is a distraction. At that age, they should have enough inner motivation to learn without anything fancy.
To sum up, I think fancy graphics serve more of a motivational role (you see what you did quickly, and it's easy to show others what you can do with a computer) than a learning role (I don't think they make learning real programming easier).
In the late 80s, before I was programming in C, I was programming in Applesoft BASIC and Logo. As a child I thought the turtle was great because it made programming simple. If I decide to teach my children Logo, I will probably start here to get an actively developed Logo interpreter.
The key thing about LOGO is user-defined functions. It is very good at conveying that, as long as you emphasize it. Show interactively how to draw a square, then make a new word called square. Then show how you can draw patterns using square. Then make those patterns into words, and so on.
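For instance, the square-then-pattern progression might look like this (a sketch of my own using Python's turtle module, which borrows Logo's turtle; in Logo itself the new word would be defined with TO SQUARE ... END):

    import turtle

    def square(size):
        # The new "word": four primitive moves become one command.
        for _ in range(4):
            turtle.forward(size)
            turtle.right(90)

    def rosette():
        # Build on the new word: twelve rotated squares form a pattern.
        for _ in range(12):
            square(80)
            turtle.right(30)

    rosette()
    turtle.done()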
You could do worse in teaching programming than using a tool like Scratch. It's a drag and drop programming interface and can be used to teach basic concepts of programming with some fun visual results (as can be seen from the gallery on their website).
Rob
Logo gave me a very clear picture (no pun intended) of how recursive functions work, and since I was doing assembly programming at the time, the need to return to the previous state when returning from a method became very clear through Logo.
The effects of recursive implementations were also very easy to see.
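To show what that looks like, here is a small recursive sketch of my own (in Python's turtle module rather than Logo): every call must restore the turtle's position and heading before it returns, which makes the "return to the previous state" idea visible on screen.

    import turtle

    def tree(length):
        if length < 10:
            return                    # base case: stop at short branches
        turtle.forward(length)        # draw this branch
        turtle.left(30)
        tree(length * 0.7)            # left subtree
        turtle.right(60)
        tree(length * 0.7)            # right subtree
        turtle.left(30)               # undo the heading changes...
        turtle.backward(length)       # ...and the move: state restored

    turtle.left(90)  # point the turtle upward
    tree(80)
    turtle.done()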
I wrote script/code in a C-like dialect for a game called Doom2 before I knew what programming was, so when it came to seriously learning about concepts such as pointers, inheritance, and polymorphism, I found the basics a breeze, because I could construct a mental model that not only helped me understand, but also helped me appreciate how cool things like pointers and arrays are.
A friend of mine is a good programming student, but he gets frustrated when he can't visualize an algorithm working. When I started helping other students I found they had the same problem: if they can't see something working, it's harder to appreciate as a fledgling programmer. The same friend eloquently suggested I "show 'em some crazy pimp shit and then show them how it's done". He's right: even if someone really wants to learn something, they'll be able to draw on more mental energy if they think what they're learning lets them do awesome things.
My best bit of advice is this: AT THE START, SPEND AS LITTLE TIME PROGRAMMING TO THE CONSOLE AS POSSIBLE.
The console makes you feel constrained and makes your efforts appear futile; only once you appreciate it as just one kind of front end should it be used for learning to program. I wouldn't use Logo myself, because I don't think it can teach concepts such as the aforementioned polymorphism or inheritance nearly as well as other methods. A friend of mine is teaching a teenager how to program using XNA in a wrapper, and I think anything that lets you blit an image to the screen is fine. That way you can see why you'd want an abstract base class called EnemyEntity with behavior that's inherited by Zombie, Dog, etc. It's not that the concepts are hard to understand; it's just that at first they're hard to appreciate.
I could go on, but I think that puts across what I've learned by teaching others. I believe using graphics in teaching programming lets students build mental models of intangible concepts faster than any other approach.
XNA: if you want to teach C#, it's an amazing graphics library; just write a wrapper sprite class to hide as much complexity as possible when first starting out and teaching concepts.
SDL: a lower-level library if you're going to start with C++.
During one of my first-year computer science papers we used Java to create fractal patterns via a turtle object.
It was pretty fun to see visually whether or not we had correctly implemented the algorithm required to produce a certain pattern. However, to answer the main question, I wouldn't say that programming via a turtle is especially useful. I'd say the best way to teach someone to program is to get them to build their own app that does whatever they want it to do. This gives them creative control, plus if they get stuck they learn how to resolve a problem.
I strongly suggest starting with an interpreted language like Logo (not a compiled one) because of the quality of the error messages. Reading error messages is very important in this process. Also, at the beginner level, Logo allows you to run your instructions one by one in direct mode and carry them into your procedures once you get the expected results.
@Alex: MicroWorlds is a commercial version of Logo, and it exists in English, Spanish, Portuguese, Italian, Russian, etc.; that's a big plus if you are not a native English speaker.
LOGO is not only Turtle-Graphics.
There are also other interesting concepts in it which come from LISP.
'Turtle' is just icing on the cake and the "imperative" side of Logo.
:)
I learned to program in BASIC by writing simple programs drawing faces (I mean circles and squares) on the screen. Somehow the whole turtle programming was never my thing, although a few of my friends learned that way. Later on I moved to Pascal, then to Delphi, Java and C++/C#.
In my opinion the trick is to "wow" your student and impress/empower with potential things that you can accomplish by writing your own programs. I would actually demonstrate some GUI programming or game programming. It's much easier to learn the basics by keeping the end goal in mind.
Recently I came across SmallBasic - a cool programming environment for kids designed to teach concepts. I would give that a try. It comes with a pretty complete paper describing how to use it.
When I got my first computer (VIC-20) and started programming it was very hard to explain to my parents what I was doing.
My mother took a course in computing, preparing for a project to computerize the library she worked in. They had a couple of classes introducing them to programming. After learning LOGO she came home and said that she suddenly understood what I was into.
So LOGO with turtle graphics brought us closer together!
I did a "computing for kids" course in the late eighties, and there was an extensive section on turtle graphics using logo. In all honesty I was bored to tears, and learned virtually nothing from it.
I think "programming the turtle" might work better for someone who is artistically inclined, or hugely into geometry, but by and large, there are far more interesting problems to attack, even for kids.
Ah, the memories of good old Logo. I think I got more of a geometry lesson than a programming lesson out of it, e.g. figuring out how much to turn at various points to produce a particular shape, design, or pattern. It may work if you plan on mixing geometry with the programming; but if the person doesn't have the basics of geometry (e.g. what a square is and how it differs from other four-sided shapes, what a triangle is, etc.), it may confuse more than it teaches.
I used Logo and the turtle at school too - a great introduction.
It looks like our kids will be getting a slightly updated interface with Microsoft Kodu. It looks very impressive: an icon-based programming language made for creating games that runs on Xbox Live.
I'm currently learning Python and using a little bit of turtle. We haven't used it in labs, but our homework does. It's nice to know it exists, and it's a good way to get certain commands and syntax down. Overall, though, I don't feel it was completely necessary.
When I was young, I found it very interesting. It was one of the first programming languages I learned, even though I used it for only about two days. It started my interest in programming.
Nowadays, I think the syntax is a bit unclear because most statements are abbreviations. Computers are far more powerful now, so the language could profit from clearer statements. Another factor is the native language of the person who is learning to use it. If English is not your native language, then Logo becomes a bit more complex to understand. So if you're teaching Logo to children, make sure they're familiar with the English terms first. (Quite easy if you're a native English speaker; more complex if you're originally Dutch, German, French, or Portuguese; even more complex if you're Russian or Chinese, because you'd have to adjust to a different character set too.)
I have just begun teaching my 7-year-old how to program using Logo, and he is having a load of fun with it. The commands are easy enough for his limited reading ability and he just loves drawing cool pictures using the turtle graphics. I was amazed at how well he retained what he had learned using it, so I feel it was a good choice for his age.
For older kids (or adults), though, other languages might have more advantages as a beginner language.
Personal experience, YMMV...
My first encounter with a computer was turtle graphics in my early teens. I loved it and was immediately hooked. (Perhaps because, for the first time, someone [something] did exactly what I told it to do?)
The visual and instant feedback made me want to do more and more. I really wanted to figure out how to replicate the pictures I saw in the book I was using. Without my even classifying it as "work", it slowly built up my early programming skills and my confidence that I could learn on my own.
I credit it with sending me down the path I'm on today: a happy software developer who can't believe I get paid to do this work (I know, I know - all corporate snickering aside, I like my work).
As I said, YMMV.