Possible Duplicate:
Why is the Lisp community so fragmented?
Despite the snarky tone, I'm actually looking for a serious answer.
I know the textbook response: Lisp is a model for computation, not a "language" per se. Still, why exactly are there so many different dialects of Lisp?
Presumably it isn't because of surface syntax issues or crucial missing features, the way it is with so many other languages. But if not that, then what?
Do they interpret that model of computation in slightly different ways? Are they pursuing different simplicity versus efficiency tradeoffs? Is it because of limitations in different compiler/interpreter codebases? Or something else entirely that non-Lispers like myself can't even imagine?
I suppose the followup question would be: if the differences matter, which is the best modern Lisp for real-world usage?
Thanks,
Dr. Ernie
There are a number of reasons for the many dialects of Lisp, some historical, some technical, and some mostly psychological.
Historical: By classical standards, Lisp was fairly slow and used lots of memory. Quite a few people have devised various techniques (or corruptions, if you don't like them) to try to make it more practical. This was especially true when Lisp machines were being built -- the hardware was devised specifically to run Lisp, and at the same time, the Lisp they ran was devised (revised?) specifically to run on that hardware and to take full advantage of its capabilities.
Technical: Some decisions that have been made at various times in Lisp were questionable (to put it nicely). For example, all modern Lisps use lexical scoping, but quite a few early ones used dynamic scoping. Some Scheme users don't think much of the non-hygienic macros in most other Lisp dialects.
Psychological: Lisp is so simple that many people have felt qualified to write their own implementations. Many Lisp programmers are also fond of experimentation and pursuing perfection, so many of those implementations included the implementor's ideas of improvements of various (usually incompatible) kinds. Nobody was coordinating efforts, so many of those extensions/changes were incompatible with each other in various ways, and each became a (more or less) distinct dialect. Some of this was probably avoidable, but some of it wasn't -- just for example, two people might both see a particular feature as flawed. One would work at improving it into something he found more acceptable, while the other removed it completely and either considered that an improvement in itself or devised something completely different to replace it.
Poor communication also often played a role. Somebody at (say) MIT might go somewhere on sabbatical and take along a tape of some Lisp implementation, which would start to be used wherever they went. That would often (quite unintentionally) fork the implementation, as the two schools then worked on it independently, in parallel.
Because it's a mudball: it gets picked up and shaped, but still remains what it was.
Common Lisp is a standard, and there are many implementations of it. It shares that trait with many other languages like C, C++, Fortran, Ada, etc. If you look around, you will find that there are many implementations of all these languages, with slightly different flags, options, and support for the corner cases.
Many other languages that are common today were not standards (at least to begin with) and had one canonical implementation/compiler/interpreter. I am thinking of languages like Perl, Python, Java, .NET, Ruby, etc. There may be some offshoots of these languages, and ports to new platforms... but overall the syntax and usage of the language is always referenced against the one true implementation.
I use the GNU CLISP for my work...I chose it because it is free, available for the platforms I am interested in, reasonably well documented, and appears to be robust, mature, and complete (at least in terms of the ANSI Lisp Standard). You may have very different requirements for your Lisp environment, and that may lead you to a different choice for your implementation.
I've been familiar with some mnemonic/memorization techniques for about a year.
I think these techniques can give a developer a significant benefit, or even make you an expert in the field.
If you are familiar with these techniques, you know that there are mnemonic techniques for long-term memorization. We often read lots of books, and there are many concepts we don't remember because they don't appear often in our daily coding life. So we need to learn them again and again, months and years later.
The same goes for frameworks. It takes some time to become familiar with a framework's syntax, useful code constructs and so on. But after some time you forget many concepts from your previous framework (or from a framework you rarely use but which is still very important to you).
By using these techniques you can build a sustainable knowledge base over time, one that reliably grows - you can be confident that you won't forget the concepts you learned earlier.
What do you think about this idea?
If you are already familiar with mnemonic techniques, please share your experience - it would be very useful and interesting to hear.
Useful links:
Method of loci
Mnemonic
My favorite method:
Type it into Google
I'm being totally serious - why do you need to remember it?
You don't memorize how to be a good programmer any more than you memorize how to be a good classical violinist. You practice, practice, practice. That will let you naturally recall the most important constructs, and as Chad says, Google is there for the less important ones. I have never felt the need to use mnemonic devices or rote memorization to learn a programming construct or technique.
"Expertise in the field" isn't about memorizing function calls. It's about the ability to break problems down, and provide performant, maintainable, reliable solutions in minimal time.
You could memorize every function call in the STL, and still be a complete neophyte programmer.
I read Harry Lorayne's "The Memory Book" a few years ago, and found that the techniques therein were great for remembering related facts. However, in my experience the techniques could have been more useful, namely:
The memorization didn't tend to work in the long run. If I wasn't practicing remembering a particular list, or body of facts, I would eventually completely forget them within a few days or weeks.
I had trouble applying the techniques to hierarchical data sets, like class libraries. This made their use less powerful for programming stuff.
The techniques were very useful for things that could be easily explained by voice, or a single stream of text. However, I had trouble applying them to things of a more visual nature, such as mathematical equations.
That said, I have used mnemonic techniques while coding for things that Google could not replace. I sometimes use the number memorization trick to recall a specific line of code (by its line number) while I jump around a code file, or to remember function names as I jump between files.
I agree with the other answers; some of the more useful things you could focus on improving are:
Troubleshooting a problem using the 'elimination' technique: basically eliminating problem areas, one by one, until you hit the right one
Quickly getting to the resource/API/information you need - use Google, SO, CodePlex, Google Code, Koders.com code search, Google Code Search, MSDN, etc. Knowing what information lies where is enough to save time drastically
Avoiding thrashing (being stuck with a problem for too long, with no results): once you've spent enough time on the problem, giving others 'complete' and 'relevant' information about it helps them help you
Finally, memorizing programming theory is not that helpful; however, just reading, listening to experts and podcasts, and attending conferences can help a great deal with 'access to information from memory'
HTH
I've been browsing Bjarne Stroustrup's new introductory programming book, Programming: Principles and Practice Using C++. It's meant for first-year university computer science and engineering students.
Early on in the book he works through an interesting extended example of creating a desktop calculator where he ends up implementing an arithmetic expression evaluator (that takes bracketed expressions and operator precedence into account) in a series of co-recursive functions based on a grammar.
This is a very interesting example, although arguably on the complex side for a lot of beginners.
I wonder what others think of this particular example: would learning programming by seeing how to implement an expression parser excite and motivate you, or would it discourage you because of all the details and complexity?
Are there other good "real" programming examples for beginners?
When I was first learning to program, the best example I ever worked with was building a text adventure game from scratch. The basics just required knowing how to display text on the screen, receive input from the keyboard, and rudimentary flow control. But since text adventures always have room to add more features/puzzles/whatever, they can be easily adapted to explore aspects of whichever language you're learning.
Of course, not everyone finds games more interesting than calculators. It really depends on the programmer.
First, let me say that cognitive psychologists have proven in numerous studies that the most important factor in learning is desire to know.
If you want to learn about programming, you need to find a domain that stokes your desire to understand. Find a challenge that can be solved with programming.
I agree with the other folks when they suggest something that you are interested in. And games seem to be a common thread. As I reflect on my experience learning to program (too many years ago), math problems and a simple game were involved.
However, I don't think I really understood the power of software until I created a useful small program that helped a business person solve a real problem. There was a tremendous amount of motivation for me because I had a "client". I wasn't getting paid, but the client needed this program. There was sincere pain (gotta get my job done quicker) related to this situation.
So my advice is to talk to people you know and ask what small annoyance or computer-related obstacle they have. Then try to fix it. It may be a simple web widget that reduces repetitive, manual tasks for an office worker.
One of my best early works was helping a little printing shop (no software, circa 1985) that struggled with estimating jobs to produce proposals that weren't money-losers. I asked a lot of questions of the sales lady and of the operations manager. There was obviously an intersection of a common pain point with a really easy calculation that I could automate. It took me a couple of days to learn Lotus 1-2-3 (a spreadsheet, for you young-uns) well enough to write a few macros. I was motivated. I had passion. I saw where I could make a difference. And that, more than anything else, drove me to learn some simple programming.
Having real people, real problems, and really simple solutions could be the inspiration you need as a beginning programmer. Don't try to write an accounting system. Just take one discrete piece of someone's frustration away. You can build on that success.
So, I wouldn't focus on the technique (yet). Don't worry about "Am I doing this the most efficient way?" The main objective for a beginner is to have success, no matter how small, and build confidence.
BTW, that Lotus 1-2-3 set of macros grew into a full job tracking system. Very archaic, limited features, but made that little print shop much more profitable.
Create your motivation, fuel your desire, and develop your passion for programming like an artist unveils the masterpiece in a blob of clay. And be persistent. Don't give up when challenged with a roadblock. We all get stumped sometimes. Those are some of the best learning moments because humans learn more from failure than success.
Good luck.
I think making tiny games, like a text version of Tetris, is a good way of getting into the programming world.
Board games are fun to design and code since they come in many shapes and difficulties,
from tic-tac-toe to checkers to Monopoly; it's reinventing the wheel for educational purposes!
The best advice I can think of is to pick something from a field of interest you have, because coding for the sake of coding might dim your resolve.
Start small. Do examples that interest you. Stretch yourself just a little every time. Get comfortable with each step, to the point that you have confidence that you know what you're doing, and then try something a little harder the next time.
I think that any example program would help you learn a new language, but a beginner should try to work with something that is easy to understand in the real world, such as a mortgage calculator or something along those lines.
I think the answer is that it would depend on the person who is learning how to program.
One nice thing about something like an arithmetic expression evaluator is that it is a project that can start very small (make it work with just the format "X SYMBOL Y" where X and Y are single-digit numbers and SYMBOL must be a plus sign) and then you are slowly expanding the functionality to the point of a complicated system.
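As a hedged sketch of that very first step (written here in F# for brevity rather than the book's C++, with a made-up function name like evalTiny), this is roughly all the "X SYMBOL Y" version needs before you start growing it:

```fsharp
// A deliberately tiny first step: one digit, a plus sign, one digit (e.g. "3+4").
// evalTiny is an invented name for this sketch.
let evalTiny (s: string) : int option =
    match Seq.toList s with
    | [a; '+'; b] when System.Char.IsDigit a && System.Char.IsDigit b ->
        Some (int (string a) + int (string b))
    | _ -> None

// evalTiny "3+4"  -> Some 7
// evalTiny "3*4"  -> None (multiplication is the next step to add)
```

From there, each extension (more digits, more operators, precedence, brackets) is a small, visible change, which is exactly what makes the project grow well.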
However, it might not be a great starter project for someone who doesn't really understand the concept of computers (hard disk, memory, etc.)
Try to think of something that you do on a computer that is repetitive, and could be easily automated. Then try to come up with how to make a program that automates that task for you. It can be anything, whether it's popping up a reminder every 15 minutes to stretch your legs or cleans up your temp directory on a regular basis.
The problem with this task is that it's complex and not related to real life. I don't need another calculator.
But once I had a CD with a scratched surface near its center and lots of valuable JPEG files inside. I dumped the data from the unscratched part of the disk, but the filesystem was surely lost. So I wrote a program which analysed the dump and separated it into files. It was not very simple, but it was a nice and exciting file I/O programming exercise.
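For anyone curious, here is a rough sketch of the kind of scan such a recovery program might start with (my own guess at an approach, in F#, not the poster's actual code): JPEG files begin with the byte sequence FF D8 FF, so candidate file boundaries can be found by scanning the dump for that marker.

```fsharp
// Scan a raw disk dump for JPEG start-of-image markers (bytes FF D8 FF).
// This only finds candidate offsets; a real recovery tool would also look
// for the FF D9 end-of-image marker and validate the data in between.
let findJpegOffsets (dump: byte[]) =
    [ for i in 0 .. dump.Length - 3 do
        if dump.[i] = 0xFFuy && dump.[i + 1] = 0xD8uy && dump.[i + 2] = 0xFFuy then
            yield i ]

// let dump = System.IO.File.ReadAllBytes "dump.bin"   // "dump.bin" is a hypothetical file name
// findJpegOffsets dump |> List.iter (printfn "candidate JPEG at offset %d")
```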
Examples can be more complex than something you try to write yourself. It's easier to follow someone else doing something than it is to do it yourself. A real-world example like this calculator may be a fine way to introduce someone to a language. For instance, Practical Common Lisp starts with an example of an in-memory database (for CDs I think) and uses that as the springboard to explore parts of the language.
I prefer seeing a real example built up over time than just a lot of simple "Hello World" programs.
I've always found that implementing a game of some sort is sufficient incentive to learn various features of a language. Card games, especially, because they generally have simple rule sets to implement, but are sufficiently complex from an abstract point of view.
I would agree, though, with everyone else: find examples of things that interest you. Not everyone is a game fan; for them, something like a mortgage calculator might be far more interesting.
Do you think having a great memory is REQUIRED to be a great programmer?
I don't consider myself a great programmer but I do think I am decent. But my memory is REALLY bad so I find myself always having to remind myself how to do things. I mean I "know where to look" but sometimes it makes me feel like I am just a crappy programmer. What makes it even worse is that I am always forgetting where things are in my source code or what algorithm I used for certain situations.
Think back on the great programmers you have encountered in your life, didn't all of them seem to have amazing memories?
Surely apocryphal, but here's the story about Einstein's number:
A reporter interviewed Albert Einstein. At the end of the interview, the reporter asked if he could have Einstein's phone number so he could call if he had further questions.
"Certainly," replied Einstein. He picked up the phone directory and looked up his phone number, then wrote it on a slip of paper and handed it to the reporter.
Dumbfounded, the reporter said, "You are considered to be the smartest man in the world and you can't remember your own phone number?"
Einstein replied, "Why should I memorize something when I know where to find it?"
I have this coworker that writes really bad code that is incredibly hard to maintain. I've come to the conclusion that his problem is good memory. He's simply able to remember where he put what functionality. Therefore he doesn't have to write code that is self-explanatory. He simply remembers that crap. The rest of us have a really hard time figuring out his code.
I'm sure that good memory isn't that guy's only problem. But I'm sure his code would improve if his memory got worse.
Treat your short-term memory as a stack (not static storage) and don't expect much more from it. I've come back to code that I wrote only a month ago and it's almost like someone else wrote it... it just takes a while to get back into the same zone.
I get teased, often, for leaving comments for myself like breadcrumbs... but it works. If I finish some function and say "AHA, that is absolutely BRILLIANT!", I immediately write a comment explaining the complexity, as I'm sure to forget.
So now, to answer a question with two questions:
What did you have for lunch last Wednesday?
What is the purpose for 'counter' in hash_foo() ?
At least, with #2, you can quickly go back and look / remember.
As long as you can remember how g-o-o-g-l-e is spelt, you're fine. :)
But seriously, you do need to keep several things in your short-term memory at once. Longer-term memory is, I think, less important. As long as you're aware that something exists, that you've seen something before, etc., then when it becomes relevant you'll know you can dig it up.
Experienced programmers can generally regurgitate APIs, minor details and so forth but in my experience this has never been a case of sitting down and memorizing things by rote. It's a natural consequence of using things again and again.
I would say the opposite, having a good memory may lead to writing code which only the author can understand, because she remembers the details of its logic. On the other hand I, having bad memory, document my code and write it as clear as possible.
Honestly, I've found poor memory to be an asset, even poor short-term memory. Poor short-term memory really forces you to practice separation of concerns. The end result is very clean, very simple, very well encapsulated code. I actually have pretty good short-term memory, but I've learned to try really hard not to rely on it after a few experiences writing code while I was distracted enough that I couldn't really retain much at once. I was actually shocked to observe that the code was far cleaner than code I'd developed in the past.
Poor long-term memory is an asset, because you end up training yourself very well on how to find and learn techniques, APIs, algorithms, etc. It also tends to encourage you to find a small set of common themes to guide you in your work.
All in all, the mark of a good programmer isn't complexity (which is really difficult to achieve without good memory), but simplicity (which by nature doesn't require much memory).
I can answer this using just one word: NO. Having a great memory to memorize everything about programming is not a must. Experience and the tedious learning that comes from practice are what matter most.
I have also experienced this. If you have enough hours (or years) of experience creating software with best practices applied, then you are truly a master of your own job and of the programming languages you use. Please don't be sad if you have a bad memory; striving to always learn and practice can defeat that weakness.
To me, there are two kinds of programmers in the world: the first were born to do it, the second learnt. In both groups they range from unbelievably poor to unbelievably great. Does memory determine those ratings? No, absolutely not. While a good memory can help you with learning, nothing helps you more than practice and understanding. After all, being able to remember the entire Encyclopaedia Britannica means nothing without understanding. My server's storage is a classic example there.
Programming is about logic, both in the code and in how you approach the problem. If you want clear, easy-to-understand code, then chances are you'll break the problem into small manageable chunks (i.e. ones that fit in your head in their entirety) and work on each one. Each function then condenses down into a single command for your next stage of complexity. At the end of that next stage, if there is another, you'll have a set of single commands again to build on. Logical naming, logical partitioning, logical assembly... I think I'm getting my logical point across ;)
My memory is appalling, and I mean appalling. I can be introduced to 3 people and by the time #3's name is said I've forgotten who #1 is. I can still write some good code, not every time or every day; when you're in the zone it's something else, and at that point it is art. So, put your memory to one side, get yourself either a really quiet space or a pink noise generator, and dive in. The only thing that's going to make you a better programmer is practice, practice, practice. The only thing to remember is that programming is a skill, and skills are practiced, and best practiced among friends who can give constructive criticism and advice... like Stack Overflow :)
Apologies for the tome level of this answer but I couldn't remember what I'd already written ;)
Having a good memory is quite useful but certainly not required. I would say that it's not that great programmers have a great memory but rather that they have spent a lot of time investigating even the littlest issues, which improved their understanding and improves recall. If you spend 4 minutes resolving a problem (Googling or asking on SO) then you probably won't remember the resolution when you hit it again 4 months down the line. It could be an evolutionary trait or just a bad memory =)
Good programmers also have well thought out principles which allows them to work on auto-pilot without second-guessing themselves. A good set of principles also achieves consistency and predictability (which is a quality of memory) through reinforcement.
This also extends into other domains. Chess grandmasters can recall an entire game played 40 years ago. That's because they remember patterns (openings, variations, root cause and effect of moves which led to the end game, etc.), which helps group individual moves into units.
In software, tools like auto-complete, a KB/wiki, searchable check-in history, etc. can help.
No. But maybe it can make you great...
An art of programming (maybe the art) is being able to approach problems in such a way that you can grasp the whole of them, despite your limitations (such as imperfect memory). This is because everyone - including the smartest of us - has limitations. Bumping into your limitations is not a failing, but a sign that you are reaching further.
This art (insofar as I know it) includes things like divide and conquer (using modules of various kinds, to match the shape of the problem); using standard techniques to telegraph your intention (idioms, OO Design Patterns are just one); separating out the core of the problem (this one is not about the code: it's about the problem); and of course comments.
I used to believe that good code was self-documenting (and even that code is truth), but recently I've been writing parsers, and including the CFG in a comment is a very helpful reference, because it is a much simpler representation of the intention of the code.
A coder's gotta know their limitations. It's unrealistic to expect to have the same grasp of something months later, as when you were in the thick of it. All the above involve accepting that problem, and working on a solution. Not only does it make your code easier for you to grasp later on, it makes it easier for someone else to grasp... but most importantly I firmly believe that clearer and simpler code is fundamentally, and transcendentally, better code.
The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. - Dijkstra
In allusion to Edsger Dijkstra, a competent programmer is fully aware of the limited size of his own skull. The more details you don't cram in your head, the better you can tackle the problem at hand.
Modularize your code heavily, refactor it, package your nifty algorithms into objects, and use those objects; that way, you don't always have to "micro-remember" each and every implementation detail of your programs.
No. The ability to forget about what you know and continue learning is at least equally important in the long term.
Good notes and bookmarks and web searches go a long way.
Remembering the really simple things is required for great programming. Things as simple as "keep at it".
An interesting perspective from the other side of your monitor: Locality of Reference
I think one benefit to having a good memory (modesty point: I have a good memory) is the ability to be able to think on your feet when not actually in front of a computer.
For example, you might be in a meeting when some new kind of functionality for your app is suggested. Can it be done? How long is it likely to take? These are questions which are easier to answer if you can pretty much walk through 250k loc in your head.
That said, I find a grain of truth in others' opinions that my own code might be less clear because I can remember it better.
Some of the best-written code I've ever seen was written in such a way that each design decision was inevitable, and the code read as its own explanation. That's far better IMHO than code which requires the reader (or, worse, the maintainer) to keep tons of arbitrary detail memorized.
My own index of complexity in code is "How much stuff do I have to keep in mind to understand this one line?"
More is worse.
I think having good memory is helpful for learning new things quickly.
This doesn't mean it is a requirement for being a great programmer.
In fact it is more about intelligence than memory capacity, but the subject is too complex to pin down the relevant qualities, compare them with programming skill, and draw any reliable conclusions.
That is the mystery of the brain.
Occam's Razor suggests that a simpler theory is likelier to be true.
If code is a theory, describing inputs going to outputs, then shorter code, using expected idioms and libraries, is more likely to be "true" - that is, it is more likely to capture the essence of the solution, so it will generalize to inputs that you didn't expect.
Shorter, unsurprising code is easier to remember.
It all depends on what your good memory remembers...
I've worked on the same project for 10 years or so and I can't remember every line and who wrote it and why.
But... I can remember pretty much all the user requests and user issues. Who wanted what and when.
I can remember pretty much all the support issues we have had.
Finding old code is easy - we have great tools for that. Finding old issues is a much more abstract process: we have JIRA and Wikis but sometimes they don't cut it because they fail to provide the semantic meaning.
So. Pay attention to what REALLY matters and remember that.
Programmers with poor memory are like Universal Turing machines compared to practical computers: technically you can accomplish the same things by referring to information you or someone else has recorded somewhere... it's just that it may take a little longer....
I think it's possible to be able remember different types of things with differing degrees of aptitude.
For example, I sometimes find I have a pretty bad memory when it comes to random facts and figures, as well as things that I've done or will be doing - the latter meaning I find bug-tracking software an invaluable tool.
On the other hand, I can remember the structure of complex pieces of software I've written, and where to find specific things within that.
This may be about logical association. Well-designed software should (in theory) have a logical structure, which may make it easier to store in your memory if your brain is wired up that way.
Random piece of information, however, may not have these associations, making them harder to remember.
I would say it's necessary for being great and fast. My memory for programming details is OK (but I have Google for that). However, when I sit in front of applications that I've primarily written (~30-40k lines of code), I'm able to load their structure almost completely into my memory. I can find the way I'm doing something in a couple of seconds and recall why I implemented it the way I did. That's invaluable. By 11 am I've been able to do more work than some others do all day. Now, that doesn't make me a great programmer, but it does make me an enormously productive programmer. This gives me time to refactor, write extra code, surf SO, grab an hour lunch, etc.
Just a simple comment: "Repetition is the mother of learning." It doesn't matter if you have a good or bad memory; the functions you use most in your programs are the ones you will remember best. Also, in my case, I have the internet: when I don't remember something I just ask, even if it is a dumb or easy question, and a lot of the time I remember the answer after I post the question, and then I quickly post the answer. The problem is how much time you put your mind to work...
:)
I think it depends. Memory for a programmer is very, very important, both short and long term. However, what you use that memory for is the important thing. As a programmer, if you're using it to memorize every nuance of an API then I'd say you're wasting your memory.
Ultimately, I try to use my memory to remember the important things and anything that I can't easily find at a later point in time. I'll usually put API stuff in short term memory and use google and intellisense to help me with the specifics. Design patterns, methodologies, lessons learned from experience, on the other hand are usually what I try to put into long term memory so I can use it effectively in the future without having to relearn everything.
In short, yes a programmer needs a good memory...both long and short term. But they need to be wise in how to use that memory...and that, I think, makes the difference in a great programmer.
Having a strong enough memory to hold the things you need to use today is the important part. If you are constantly searching for answers to the same question, you probably have a weak memory.
The most important thing is remembering where you can find the answers. I will sometimes blog on topics that are a bit more complex so I have a place to find them when I need them. But I don't try to hold onto them perpetually, because I can search my own blog and find them later. I do the same with other people's blogs and I know which blogs to hit for certain types of answers.
When all else fails, Google it!
I used to think having a good memory was a time saver because the more you remembered, the less time you spent looking things up, but tools and IDEs have gotten so good now that many things I used to memorize, like syntax and various code snippets, are quickly available in a few keystrokes. That, and the fact that the amount of information in the field grows way faster than any mortal programmer can keep up with, makes me think memory is less important these days. More important is having good access to, and organization of, useful information.
From day 1 of my programming career, I started with object-oriented programming. However, I'm interested in learning other paradigms (something which I've said here on SO a number of times is a good thing, but I haven't had the time to do). I think I'm not only ready, but have the time, so I'll be starting functional programming with F#.
However, I'm not sure how to structure, much less design, applications. I'm used to the one-class-per-file and class-noun/function-verb ideas in OO programming. How do you design and structure functional applications?
Read the SICP.
Also, there is a PDF Version available.
You might want to check out a recent blog entry of mine: How does functional programming affect the structure of your code?
At a high level, an OO design methodology is still quite useful for structuring an F# program, but you'll find this breaking down (more exceptions to the rule) as you get down to lower levels. At a physical level, "one class per file" will not work in all cases, as mutually recursive types need to be defined in the same file (type Class1 = ... and Class2 = ...), and a bit of your code may reside in "free" functions not bound to a particular class (this is what F# "module"s are good for). The file-ordering constraints in F# will also force you to think critically about the dependencies among types in your program; this is a double-edged sword, as it may take more work/thought to untangle high-level dependencies, but it will yield programs that are organized in a way that always makes them approachable (as the most primitive entities always come first and you can always read a program from 'top to bottom' and have new things introduced one by one, rather than just starting to look at a directory full of files of code and not knowing 'where to start').
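To make the two points about mutually recursive types and "free" functions concrete, here is a small hedged sketch (the type and function names are invented for illustration, not taken from any real codebase):

```fsharp
// Mutually recursive types must be declared together with 'and',
// so they end up in the same file whether you like it or not.
type Order =
    { Id: int
      Lines: OrderLine list }
and OrderLine =
    { Parent: Order option
      Sku: string
      Qty: int }

// Functions that don't belong to any one class live in a module.
module Pricing =
    let lineTotal (price: decimal) (line: OrderLine) = price * decimal line.Qty
```

Because Order and OrderLine refer to each other, they have to be joined with and; the Pricing module then plays roughly the role a static helper class would play in C#.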
How to Design Programs is all about this (at tiresome length, using Scheme instead of F#, but the principles carry over). Briefly, your code mirrors your datatypes; this idea goes back to old-fashioned "structured programming", only functional programming is more explicit about it, and with fancier datatypes.
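As a tiny, hedged illustration of "your code mirrors your datatypes" (my own example in F#, not taken from the book): a recursive type gets a recursive function with one branch per constructor.

```fsharp
// The shape of the data...
type Tree<'a> =
    | Leaf
    | Node of Tree<'a> * 'a * Tree<'a>

// ...dictates the shape of the code: one match case per constructor,
// recursing exactly where the type recurses.
let rec size tree =
    match tree with
    | Leaf -> 0
    | Node (left, _, right) -> size left + 1 + size right
```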
Given that modern functional languages (i.e. not lisps) by default use early-bound polymorphic functions (efficiently), and that object-orientation is just a particular way of arranging to have polymorphic functions, it's not really very different, if you know how to design properly encapsulated classes.
Lisps use late binding to achieve a similar effect. To be honest, there's not much difference, except that you don't explicitly declare the structure of types.
If you've programmed extensively with C++ template functions, then you probably have an idea already.
In any case, the answer is small "classes", and instead of modifying internal state, you return a new version with different state.
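A minimal sketch of that idea in F# (the Account type and deposit function are made up for illustration): records are immutable by default, and the copy-and-update expression gives you the new version.

```fsharp
type Account = { Owner: string; Balance: decimal }

// Instead of mutating the account, return a new record with the changed field.
let deposit amount account =
    { account with Balance = account.Balance + amount }

let a1 = { Owner = "Ada"; Balance = 100m }
let a2 = a1 |> deposit 25m   // a1 is unchanged; a2.Balance = 125m
```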
F# provides the conventional OO approaches for large-scale structured programming (e.g. interfaces) and does not attempt to provide the experimental approaches pioneered in languages like OCaml (e.g. functors).
Consequently, the large-scale structuring of F# programs is essentially the same as that of C# programs.
Functional programming is a different paradigm for sure. Perhaps the easiest way to wrap your head around it is to insist that the design be laid out as a flow chart. Each function is distinct: no inheritance, no polymorphism. The data is passed around from function to function to make deletions, updates, and insertions, and to create new data.
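Here is a rough sketch of that flow-chart style in F# (the data and function names are invented): each stage is a distinct function, and the data is threaded through them with the pipeline operator.

```fsharp
// Each stage is its own function; no stage modifies shared state.
let withoutEmpties orders = orders |> List.filter (fun (_, qty) -> qty > 0)
let totalQty orders = orders |> List.sumBy snd

// The data flows from function to function, like boxes on a flow chart.
let total =
    [ "apple", 3; "plum", 0; "pear", 2 ]
    |> withoutEmpties
    |> totalQty       // total = 5
```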
On structuring functional programs:
While OO languages structure the code with classes, functional languages structure it with modules. Objects contain state and methods, modules contain data types and functions. In both cases the structural units group data types together with related behavior. Both paradigms have tools for building and enforcing abstraction barriers.
I would highly recommend picking a functional programming language you are comfortable with (F#, OCaml, Haskell, or Scheme) and taking a long look at how its standard library is structured.
Compare, for example, the OCaml Stack module with System.Collections.Generic.Stack from .NET or a similar collection in Java.
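As a hedged, toy illustration of that module-centric structure (this is not the real OCaml or .NET Stack code, just a sketch in F#): the module groups the data type with the functions that operate on it, and callers go through those functions rather than poking at the representation.

```fsharp
// A toy functional stack in the spirit of OCaml's Stack module
// (the real one is imperative; this sketch is purely functional).
// The type and the functions that understand it live in one module.
module Stack =
    type 'a t = 'a list

    let push x (s: 'a t) : 'a t = x :: s

    let pop (s: 'a t) : ('a * 'a t) option =
        match s with
        | [] -> None
        | x :: rest -> Some (x, rest)

// Usage: callers go through the module's functions.
// (List.empty : Stack.t<int>) |> Stack.push 1 |> Stack.push 2 |> Stack.pop
```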
It is all about pure functions and how to compose them to build larger abstractions. This is actually a hard problem for which a robust mathematical background is needed. Luckily, there are several patterns with deep formal and practical research behind them. In Functional and Reactive Domain Modeling, Debasish Ghosh explores this topic further and puts together several practical scenarios applying pure functional patterns:
Functional and Reactive Domain Modeling teaches you how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start with the basics of functional programming and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass-based design, and isolation of side-effects can make your model compose for readability and verifiability.