What can Expect do that Pexpect cannot do? - comparison

I am considering starting to use Pexpect. On Pexpect's homepage I find this:
Q: Why not just use Expect?
A: I love it. It's great. It has bailed me out of some real jams, but I wanted something that would do 90% of what I need from Expect; be 10% of the size; and allow me to write my code in Python instead of TCL. Pexpect is not nearly as big as Expect, but Pexpect does everything I have ever used Expect for.
So there is a 10% difference between Pexpect and Expect. My question is: what is that 10% difference? What is it that Expect can do that Pexpect can't?

That question ("What is it that Expect can do that Pexpect can't") is a bit misleading. It's not that Pexpect can't do things that Expect can; it's that Expect has a lot of extra support to make this kind of programming easier.
As an example, take the interact command, which lets the user interact directly with the spawned process. In Pexpect, that's all interact does. (And that may be sufficient for your needs, as you say.) In contrast, Expect's interact has support for detecting patterns during an interact, hooking together multiple spawned processes, and so on. Of course, you can do all of this by coding it yourself, but your code will be longer - sometimes a lot longer, because you'll essentially have to rewrite your own interact, debug it, and so forth. In fact, you may have encountered these situations already without realizing how much simpler the equivalent Expect code would be.
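To make that concrete, here is a minimal sketch of hand-rolling just one Expect feature - reacting to a pattern in the child's output during an interact - using Pexpect's output_filter hook. The spawned command, the pattern, and the reaction are all invented for illustration.

    import pexpect

    # Spawn a shell and let the user drive it while we watch the output.
    child = pexpect.spawn("bash")

    window = b""  # sliding window over recent output, since a chunk can split a pattern

    def watch_output(data):
        """Called with every chunk the child writes; scan for a pattern
        and react - roughly what Expect's interact does declaratively."""
        global window
        window = (window + data)[-1024:]
        if b"ERROR" in window:
            child.sendline("echo pattern seen")  # the hand-rolled reaction
            window = b""
        return data  # whatever we return is what the user actually sees

    # Ctrl-] (the default escape character) returns control to the script.
    child.interact(output_filter=watch_output)

And this covers only a single pattern on a single process; Expect's interact can also hook multiple spawned processes together, which you would have to build yourself on top of something like select.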
Of course, the extra support may be more than offset by your preference for Python. :-P


Erlang Hot Code Loading not widely used?

I just saw this 2012 video from LinuxConf.au about Erlang in production.
There's a part in the video (around minute 29) where Bernard says that no big Erlang projects use hot code loading apart from Ericsson, because it's really hard to guarantee things will work.
Is that still true? Are there tools to help test a hot code load or make it easier nowadays?
This is not true. Every Erlang user takes advantage of hot code loading in one way or another - whether for development, testing, troubleshooting, one-off fixes, or full-scale deployments. It is one of Erlang's major advantages, and a rather unique one.
For example, WhatsApp, one of the biggest Erlang users, relies on hot code loading for almost all code pushes.
I have personally worked with hot code loading in scenarios where each change was well understood and often performed by the same person who made it. It works extremely well, and good engineers have no problems doing this. Speaking of tools, loading modules one by one from the Erlang shell using l(...). or all at once using l(). (see here) works just fine. Some prefer release-based tools like relx.
Others, like Ericsson, use enterprise-style deployments with hot code loading after rigorous testing of clear-cut releases and patches. The goal here is to upgrade without using spare capacity and special procedures for draining and shifting load. Operationally this may be simpler and more efficient than restarts, but testing can be more expensive.
It is difficult to know whether it is a widely or scarcely used feature. Nowadays there are plenty of Erlang systems out there. I can, however, think of reasons both for and against using it, since I have been working with both options for quite some time.
In favour of using it:
It is obviously quite useful during development to ensure a fast feedback cycle. I always develop with an open shell, with functions to load code automatically as soon as it compiles.
In the rare case you need to implement a monolithic application with high availability requirements, it is basically the only option.
The main reason not to use it is, as the presentation states, that it is hard, even once you manage to understand exactly how it works (which is not the hardest part).
It is not, in my opinion, just a problem of tooling; rather, you take on a lot of intrinsic complications simply because your code is now part of the mutable running state of the system. You basically end up with a long-running system that changes behaviour, which adds the following to the problems you already had:
You are no longer sure that restarting the system will not change its behaviour in some fundamental way. So you will probably need to take extra care to make sure that whatever code you load is also written to disk.
Changing the way your modules work (i.e. loading new code) is very tricky unless a) you never break compatibility, b) you somehow figure out the order in which the modules should change, or c) you assume the worst that can happen is a few crashes due to undefined functions, function or case clauses, etc., and hope for the best. (The actual worst case is when the new and old modules interact in unexpected ways while you haven't finished loading all of the new ones, and they actually run some impossible logic.)
You will almost certainly end up killing some process running old code when loading new code at some point. Maybe your supervisors will help you, maybe not; in any case, that can be very confusing and difficult to debug.
As the presentation also states, it is very hard to test (if not impossible).
Etc.
On top of all that, you are running a long-living server with long-living state, which is far from ideal.
So my advice is always this: if you can get away with a distributed application and rolling upgrades, you should do it. That option is much easier to handle and, in my experience, performs better overall.

LLVM-based code mutation for genetic programming?

For a study on genetic programming, I would like to implement an evolutionary system on the basis of LLVM and apply code mutations (possibly at the IR level).
I found llvm-mutate, which is quite useful for executing point mutations.
As far as I have understood, the instructions get counted/numbered, and one can then e.g. delete a numbered instruction.
However, the introduction of new instructions only seems to be possible using statements already available in the code.
Real mutation, however, would allow inserting any of the allowed IR instructions, irrespective of whether it is used in the code being mutated.
In addition, it should be possible to insert calls to library functions of linked libraries (not used in the current code, but possibly available because the library has been linked in by clang).
Did I overlook this in llvm-mutate, or is it really not possible so far?
Are there any projects trying to implement (or that have already implemented) such mutations for LLVM?
LLVM has lots of code analysis tools that should allow implementing the aforementioned approach, but LLVM is huge, so I'm a bit disoriented. Any hints as to which tools could be helpful (e.g. for getting a list of available library functions)?
Thanks
Alex
Very interesting question. I have been intrigued by the possibility of doing binary-level genetic programming for a while. With respect to what you ask:
It is apparent from its documentation that LLVM-mutate can't do what you are asking. However, I think it is wise for it not to. My reasoning is that any machine-language genetic program would inevitably face the halting problem: it would be impossible to know whether a randomly generated instruction would completely crash the whole computer (for example, by assigning a value to an OS-reserved pointer), or whether it might run forever and take all of your CPU cycles. Turing's theorem tells us that it is impossible to know in advance whether a given program will do that. Mind you, LLVM-mutate can still cause a perfectly harmless program to crash or run forever, but I think its approach makes that less likely by only using existing instructions.
However, such a thing as "impossibility" only deters scientists, not engineers :-)...
What I have been thinking is this: in nature, real mutations work a lot more like LLVM-mutate than like what we do in normal genetic programming. In other words, they simply swap letters out of a very limited set (A, T, C, G), and every possible variation comes out of this. We could have a program, or set of programs, with an initial set of instructions, plus a set of "possible functions" either linked or defined in the program. Most of these functions would not actually be used, but they would be there to provide "raw DNA" for mutations, just as in our DNA. This set of functions would contain the complete (or semi-complete) set of possible functions for a problem space. Then we simply use basic operations like the ones in LLVM-mutate. A toy sketch of this idea follows.
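Here is that toy sketch in Python, standing in for the IR level: a candidate program is a list of opcodes, and mutations draw on a pool that also contains functions the program does not currently use (the analogue of linked-but-unused library code). All names here are invented for the sketch.

    import random

    # The "gene pool": every instruction available to mutations, including
    # library calls the current program never uses (the "raw DNA").
    GENE_POOL = ["add", "sub", "mul", "load", "store", "call_sqrt", "call_log"]

    def point_mutate(program, pool=GENE_POOL):
        """Replace one randomly chosen instruction with any pool member,
        not just an instruction already present in the program."""
        mutant = list(program)
        mutant[random.randrange(len(mutant))] = random.choice(pool)
        return mutant

    def insert_mutate(program, pool=GENE_POOL):
        """Insert a pool instruction at a random position."""
        mutant = list(program)
        mutant.insert(random.randrange(len(mutant) + 1), random.choice(pool))
        return mutant

    parent = ["load", "add", "store"]
    print(insert_mutate(point_mutate(parent)))  # e.g. ['load', 'call_log', 'sub', 'store']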
Some possible problems though:
Given the amount of possible variability, the only way to have acceptable execution times would be to have massive amounts of computing power. Possibly achievable in the cloud or with GPUs.
You would still have to contend with Mr. Turing's halting problem. However, I think this could be resolved by running the solutions in a "sandbox" that doesn't take you down if a solution blows up: something like a single-use virtual machine or a Docker-like container, with a time limit (to get out of infinite loops). A solution that crashes or times out would get the worst possible fitness, so that the programs would tend to diverge away from those paths. (A sketch of such a sandboxed evaluation follows this list.)
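As a rough sketch of that evaluation strategy, assuming each candidate is compiled to a standalone binary: run it in a separate process under a time limit, and give crashes and timeouts the worst fitness. The score function is a placeholder, and a real system would add container- or VM-level isolation around this rather than relying on a bare subprocess.

    import subprocess

    WORST_FITNESS = float("inf")  # minimizing: lower fitness is better

    def evaluate(binary_path, test_input, time_limit=2.0):
        """Run one candidate with a time limit; crashes and timeouts
        get the worst possible fitness."""
        try:
            result = subprocess.run(
                [binary_path],
                input=test_input,        # bytes fed to the candidate's stdin
                capture_output=True,
                timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return WORST_FITNESS         # probably an infinite loop
        if result.returncode != 0:
            return WORST_FITNESS         # crashed
        return score(result.stdout)

    def score(output):
        # Placeholder for problem-specific scoring of the candidate's output.
        return len(output)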
As to why do this at all, I can see a number of interesting applications: self-healing programs, programs that self-optimize for a specific environment, program "vaccination" against vulnerabilities, mutating viruses, quality assurance, etc.
I think there's a potential open source project here. It would be insane, dangerous, and a time-sucking vortex: just my kind of project. Count me in if someone ends up doing it.

How do you design prototypes in Erlang?

In the early design phase of a small Erlang app, how do you do prototyping?
Is it better to first prototype without OTP, just to prove the main mechanics in plain Erlang, and then add what OTP offers as the requirements and aspects are refined - or to use OTP from the beginning?
(The answer below is not trying to plug my instructional; it just happens to apply directly to the OP's question. Were it possible, I would just send the OP a private message or email. At the time of this answer my demonstration system is only barely worth even reading, aside from basic architecture concepts.)
I start with a slew of function stubs. I do this in most languages (even something like this in assembler). The special thing about this in Erlang is that my initial stubs represent supervisors or logical managers, not one-off solutions to elements of my fundamental problem.
Beyond that, I like to do something most people abhor these days: talking the problem out in prose to discover inconsistencies in the way I view the problem. I've just started on an example of this here (as in, I'm still working on this before and after work daily as of today, 2014.11.06): http://zxq9.com/erlmud.
Some system stubs (conceptual, not OTP -- which is integral to the idea I'm trying to demonstrate in the project, actually) are here: https://github.com/zxq9/erlmud/tree/be7c6a8ae0d91aac37850083091ae4d15f1369a4/erlmud-0.1 for example. Over the next few days they will change significantly until there is a prototype system that works instead of just stubs. If you're really curious about this, follow the commits from the one I linked over the next two weeks or so (paid work schedule permitting, of course).
One positive thing I've noticed about prototyping with stubs, rather than jumping straight into OTP behaviours, is that very often the behaviour you assumed was a proper fit for a component turns out not to be. There are many cases where I anticipate I will want a gen_server, but after writing some stubs and messing around a bit I find myself beginning to manually implement an FSM. Sometimes it happens in reverse, too: I think I need an FSM and wind up writing a server, or realize I could benefit from a proper gen_event. Once you've ironed out what you're doing, it is pretty easy to convert pure Erlang into OTP. It is much less easy to edit your mental model of how a component works once you've written a gen_fsm or gen_server, because you start to feel invested in thinking of it in OTP terms prematurely.
Remember: typing is the easy part, the real battle is figuring out what to type. So begin boldly by writing executable stubs and toy with them.
There is no special recipe for doing prototypes in Erlang. How would you do a prototype in Java, C#, Scala, or (put any language here)?
When prototyping, you need to achieve your proof of concept as fast as possible and deliver a minimal viable project.
In your case, does OTP help you deliver that minimal viable project or not?
If yes, then use it; and of course, don't use it if it doesn't.
Are you familiar with OTP concepts in the first place? If not, then you need to learn them, and that means investing more time in learning OTP. Is that okay for your prototyping purposes?
I'm only trying to highlight the fact that prototyping in Erlang isn't different from any other language.

How to approach learning a new SDK/API/library?

Let's say that you have to implement some functionality that is not trivial (it will take at least one work week). You have an SDK/API/library that contains (numerous) code samples demonstrating the usage of the parts of the SDK needed to implement that functionality.
How do you approach learning from all the samples and extracting the necessary information, techniques, etc. in order to use them to implement the "real thing"? The key questions are:
Do you use some tool for diagramming of the control flow, the interactions between the functions from the SDK, and the sample itself? Which kind of diagrams do you find useful? (I was thinking that the UML sequence diagram can be quite useful together with the debugger in this case).
How do you keep track of the relevant and often interrelated information about SDK/API function calls, and the general structure and call order in the sample programs that have to be used as a reference - mind maps, plain text notes, comments added to the sample code, some refactoring of the sample code to suit your personal coding style in order to make learning easier?
Personally, I use the prototyping approach. Keep development to manageable iterations; in the beginning, those iterations are really small. As part of this, don't be afraid to throw code away and start again (every time I say that, somewhere a project manager has a heart attack).
If your particular task can't easily or reasonably be divided into really small starting tasks then start with some substitute until you get going.
You want to keep it as simple as you can (the proverbial "Hello world") just to familiarize yourself with building, deploying, debugging, what error messages look like, the simple things that can and do go wrong in the beginning, etc.
I don't go as far as using a diagramming tool, sorry (I barely see the point of that for my job).
As soon as you start trying things you'll get the hang of it, even if in the beginning you have no idea of what's going on and why what you're doing works (or doesn't).
I usually compile and modify the examples, making them fit something that I need to do myself. I tend to do this while using and annotating the corresponding documents. Being a bit old school, the tool I usually use for diagramming is a pencil, or for the really complex stuff two or more colored pens.
I am by no means a seasoned programmer. In fact, I am learning C++, and I've been studying the language primarily from books. When I try to stray from the books (which happens a lot, because I want to start contributing to programs like LibreOffice, for example), I find myself getting lost. Furthermore, when I'm using functionality of a library, my implementations are often wrong because I don't really understand how the library was created and/or why things need to be done that way. When I look at sample source code, I see how something is done, but I don't understand why it's done that way, which leads to poor design in my programs. As a result, I'm constantly guessing at how to do something and dealing with errors as I encounter them. Very unproductive and frustrating.
Going back to my book comment: two books which I have read from cover to cover more than once are Ivor Horton's Beginning Visual C++ 2010 and Starting Out with C++: Early Objects (7th Edition). What I really loved about Ivor Horton's book is that it contained thorough explanations of why something needs to be done a certain way. For example, before any Windows programming began, a lot of explanation about how Windows works was given first. Understanding how and why things work a certain way really helps in how I develop software.
So, to contribute my two pennies towards answering your question: I think the best approach is to pick up well-written books and sit down and begin learning about that library, API, SDK, whatever, in a structured way that offers real-world examples along with explanations as to how and why things are implemented as they are.
I don't know if I totally missed your question, but I don't think I did.
Cheers!
This was my first post on this site. Don't rip me too hard. (:

Is a great memory a requirement for great programming? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Do you think having a great memory is REQUIRED to be a great programmer?
I don't consider myself a great programmer but I do think I am decent. But my memory is REALLY bad so I find myself always having to remind myself how to do things. I mean I "know where to look" but sometimes it makes me feel like I am just a crappy programmer. What makes it even worse is that I am always forgetting where things are in my source code or what algorithm I used for certain situations.
Think back on the great programmers you have encountered in your life, didn't all of them seem to have amazing memories?
Surely apocryphal, but here's the story of Einstein's phone number:
A reporter interviewed Albert Einstein. At the end of the interview, the reporter asked if he could have Einstein's phone number so he could call if he had further questions.
"Certainly," replied Einstein. He picked up the phone directory and looked up his phone number, then wrote it on a slip of paper and handed it to the reporter.
Dumbfounded, the reporter said, "You are considered to be the smartest man in the world and you can't remember your own phone number?"
Einstein replied, "Why should I memorize something when I know where to find it?"
I have this coworker who writes really bad code that is incredibly hard to maintain. I've come to the conclusion that his problem is a good memory. He's simply able to remember where he put what functionality, so he doesn't have to write code that is self-explanatory. He simply remembers that crap. The rest of us have a really hard time figuring out his code.
I'm sure a good memory isn't that guy's only problem, but I'm sure his code would improve if his memory got worse.
Treat your short-term memory as a stack (not static) and don't expect much more from it. I've come back to code that I wrote only a month ago and it's almost like someone else wrote it... it just takes a while to get back into the same zone.
I get teased, often, for leaving comments for myself like breadcrumbs... but it works. If I finish some function and say "AHA, that is absolutely BRILLIANT!", I immediately comment the complexity, as I'm sure to forget.
So now, to answer a question with two questions:
1. What did you have for lunch last Wednesday?
2. What is the purpose of 'counter' in hash_foo()?
At least with #2, you can quickly go back and look / remember.
As long as you can remember how g-o-o-g-l-e is spelt, you're fine. :)
But seriously, you do need to keep several things in your short-term memory at once. Longer-term memory is, I think, less important. As long as you're aware that something exists, that you've seen it before, and so on, then when it becomes relevant you'll know you can dig it up.
Experienced programmers can generally regurgitate APIs, minor details and so forth but in my experience this has never been a case of sitting down and memorizing things by rote. It's a natural consequence of using things again and again.
I would say the opposite: having a good memory may lead to writing code which only the author can understand, because she remembers the details of its logic. On the other hand, I, having a bad memory, document my code and write it as clearly as possible.
Honestly, I've found poor memory to be an asset, even poor short-term memory. Poor short-term memory really forces you to break out the separation of concerns. The end result is very clean, very simple, very well encapsulated code. I actually have pretty good short-term memory, but I've learned to try really hard not to rely on it, after a few experiences writing code while I was distracted enough that I couldn't retain much at once. I was actually shocked to observe that the code was far cleaner than code I'd developed in the past.
Poor long-term memory is an asset too, because you end up training yourself very well on how to find and learn techniques, APIs, algorithms, etc. It also tends to encourage you to find a small set of common themes to guide you in your work.
All in all, the mark of a good programmer isn't complexity (which is really difficult to achieve without good memory), but simplicity (which by nature doesn't require much memory).
I can answer this using exactly one word: NO. Having a great memory that lets you memorize everything about programming is not a must. Experience and the tedious learning that comes from practice are what matter most.
I have experienced this too. If you have enough hours (or years) of experience creating software with best practices applied, then you're truly a master of your own job and of the programming languages you use. Please don't be sad if you have a bad memory; striving to always learn and practice can defeat that weakness.
To me, there are two kinds of programmers in the world: the first were born to do it, the second learnt it. In both groups they range from unbelievably poor to unbelievably great. Does memory determine those ratings? No, absolutely not. While a good memory can help you with learning, nothing helps you more than practice and understanding. After all, being able to remember the entire Encyclopaedia Britannica means nothing without understanding. My server's storage is a classic example there.
Programming is about logic, both in the code and in how you approach the problem. If you want clear, easy-to-understand code, then chances are you'll break the problem into small manageable chunks (i.e. ones that fit in your head in their entirety) and work on each one. Each function then condenses down into a single command for your next stage of complexity. At the end of that next stage, if there is another, you'll have a set of single commands again to build on. Logical naming, logical partitioning, logical assembly... I think I'm getting my logical point across ;)
My memory is appalling, and I mean appalling. I can be introduced to three people, and by the time #3's name is said I've forgotten who #1 is. I can still write some good code - not every time or every day; when you're in the zone it's something else, and at that point it is art. So put your memory to one side, get yourself either a really quiet space or a pink noise generator, and dive in. The only thing that's going to make you a better programmer is practice, practice, practice. The only thing to remember is that programming is a skill, and skills are practiced - best practiced among friends who can give constructive criticism and advice... like Stack Overflow :)
Apologies for the tome level of this answer, but I couldn't remember what I'd already written ;)
Having a good memory is quite useful but certainly not required. I would say that it's not that great programmers have a great memory; rather, they have spent a lot of time investigating even the littlest issues, which improved their understanding and improves recall. If you spend four minutes resolving a problem (Googling, or asking on SO), then you probably won't remember the resolution when you hit the problem again four months down the line. It could be an evolutionary trait or just a bad memory =)
Good programmers also have well-thought-out principles, which allow them to work on auto-pilot without second-guessing themselves. A good set of principles also achieves consistency and predictability (which is a quality of memory) through reinforcement.
This also extends into other domains. Chess grandmasters can recall an entire game played 40 years ago. That's because they remember patterns (openings, variations, root cause and effect of the moves that led to the endgame, etc.), which helps group individual moves into units.
In software, tools like auto-complete, a KB/wiki, searchable check-in history, etc. can help.
No. But maybe it can make you great...
An art of programming (maybe the art) is being able to approach problems in such a way that you can grasp the whole of them, despite your limitations (such as imperfect memory). This is because everyone - including the smartest of us - has limitations. Bumping into your limitations is not a sign that you have limitations, but a sign that you are reaching further.
This art (insofar as I know it) includes things like divide and conquer (using modules of various kinds, to match the shape of the problem); using standard techniques to telegraph your intention (idioms, OO Design Patterns are just one); separating out the core of the problem (this one is not about the code: it's about the problem); and of course comments.
I used to believe that good code was self-documenting (and even, that code is truth), but recently I'm writing parsers, and including the CFG in a comment is a very helpful reference, because it is a much simpler representation of the intention of the code.
A coder's gotta know their limitations. It's unrealistic to expect to have the same grasp of something months later, as when you were in the thick of it. All the above involve accepting that problem, and working on a solution. Not only does it make your code easier for you to grasp later on, it makes it easier for someone else to grasp... but most importantly I firmly believe that clearer and simpler code is fundamentally, and transcendentally, better code.
The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. - Dijkstra
In allusion to Edsger Dijkstra, a competent programmer is fully aware of the limited size of his own skull. The more details you don't cram in your head, the better you can tackle the problem at hand.
Modularize your code heavily, refactor it, package your nifty algorithms into objects, and use those objects; that way, you don't always have to "micro-remember" each and every implementation detail of your programs.
No. The ability to forget about what you know and continue learning is at least equally important in the long term.
Good notes and bookmarks and web searches go a long way.
Remembering the really simple things is required for great programming. Things as simple as "keep at it".
An interesting perspective from the other side of your monitor: Locality of Reference
I think one benefit to having a good memory (modesty point: I have a good memory) is the ability to be able to think on your feet when not actually in front of a computer.
For example, you might be in a meeting when some new kind of functionality for your app is suggested. Can it be done? How long is it likely to take? These are questions which are easier to answer if you can pretty much walk through 250k loc in your head.
That said, I find a grain of truth in others' opinions that my own code might be less clear because I can remember it better.
Some of the best-written code I've ever seen was written in such a way that each design decision was inevitable, and the code read as its own explanation. That's far better IMHO than code which requires the reader (or, worse, the maintainer) to keep tons of arbitrary detail memorized.
My own index of complexity in code is "How much stuff do I have to keep in mind to understand this one line?"
More is worse.
I think having good memory is helpful for learning new things quickly.
This doesn't mean it is a requirement for being a great programmer.
In fact, it is more about intelligence than memory capability, but it's too complex a subject to be able to identify certain qualities, compare them with programming skills, and retrieve any relevant conclusion.
That is the mystery of the brain.
Occam's Razor suggests that a simpler theory is likelier to be true.
If code is a theory, describing inputs going to outputs, then shorter code, using expected idioms and libraries, is more likely to be "true" - that is, it is more likely to capture the essence of the solution, so it will generalize to inputs that you didn't expect.
Shorter, unsurprising code is easier to remember.
It all depends on what your good memory remembers...
I've worked on the same project for 10 years or so and I can't remember every line and who wrote it and why.
But... I can remember pretty much all the user requests and user issues. Who wanted what and when.
I can remember pretty much all the support issues we have had.
Finding old code is easy - we have great tools for that. Finding old issues is a much more abstract process: we have JIRA and Wikis but sometimes they don't cut it because they fail to provide the semantic meaning.
So. Pay attention to what REALLY matters and remember that.
Programmers with poor memory are like Universal Turing machines compared to practical computers: technically you can accomplish the same things by referring to information you or someone else has recorded somewhere... it's just that it may take a little longer...
I think it's possible to be able to remember different types of things with differing degrees of aptitude.
For example, I sometimes find I have a pretty bad memory when it comes to random facts and figures, as well as things that I've done or will be doing - the latter meaning I find bug-tracking software an invaluable tool.
On the other hand, I can remember the structure of complex pieces of software I've written, and where to find specific things within that.
This may be about logical association. Well-designed software should (in theory) have a logical structure, which may make it easier to store in your memory if your brain is wired up that way.
Random pieces of information, however, may not have these associations, making them harder to remember.
I would say it's necessary for being great and fast. My memory for programming details is OK (and I have Google for that). However, when I sit in front of applications that I've primarily written (~30-40k lines of code), I'm able to load their structure almost completely into my memory. I can find the way I'm doing something in a couple of seconds, and recall why I implemented it the way I did. That's invaluable. By 11 am I've been able to do more work than some others do all day. Now, that doesn't make me a great programmer, but it does make me an enormously productive one. This gives me time to refactor, write extra code, surf SO, grab an hour lunch, etc.
Just a simple comment: "Repetition is the mother of learning." It doesn't matter if you have a good or bad memory: the functions you use most in your programs are the ones you will remember best. Also, in my case, I have the internet; when I don't remember something I just ask, even if it is a dumb or easy question, and a lot of times I remember the answer after I post the question, and then I quickly post the answer myself. The problem is how much time you put your mind to work...
:)
I think it depends. Memory for a programmer is very, very important, both short- and long-term. However, what you use that memory for is the important thing. As a programmer, if you're using it to memorize every nuance of an API, then I'd say you're wasting your memory.
Ultimately, I try to use my memory to remember the important things, and anything that I can't easily find at a later point in time. I'll usually put API stuff in short-term memory and use Google and IntelliSense to help me with the specifics. Design patterns, methodologies, and lessons learned from experience, on the other hand, are what I try to put into long-term memory, so I can use them effectively in the future without having to relearn everything.
In short, yes, a programmer needs a good memory, both long- and short-term. But they need to be wise in how they use that memory... and that, I think, makes the difference in a great programmer.
Having a strong enough memory to hold the things you need to use today is the important part. If you are constantly searching for answers to the same question, you probably have a weak memory.
The most important thing is remembering where you can find the answers. I will sometimes blog on topics that are a bit more complex so I have a place to find them when I need them. But I don't try to hold onto them perpetually, because I can search my own blog and find them later. I do the same with other people's blogs and I know which blogs to hit for certain types of answers.
When all else fails, Google it!
I used to think having a good memory was a time saver, because the more you remembered, the less time you spent looking things up; but tools and IDEs have gotten so good now that many things I used to memorize, like syntax and various code snippets, are quickly available in a few keystrokes. That, and the fact that the amount of information in the field grows far faster than any mortal programmer can keep up with, makes me think memory is less important now. More important is having good access to, and organization of, useful information.
