Did you feel learning to program with turtle graphics was useful? [closed]

I'm preparing to teach someone to program. When I learned the course material, I used turtle graphics for the first few exercises. In reading introductory textbooks, I have not found one that uses the technique. Did others find this approach helpful? If not, what is a better way to learn to program?

I think it depends on the age of the target group.
If they are children (I would say up to 12-14 years), doing any easy graphics is a good way to motivate them; on the other hand, don't expect them to learn much about real programming or algorithms.
If they are teens (14-18), it's perhaps still good to use some algorithms that give pretty results (for example 3D or fractals), but since they are older and capable of more abstract thinking, I don't think 2D turtle graphics is interesting enough.
If they are older, doing any graphics is a distraction. At that age, they should have enough inner motivation to learn without anything fancy.
To sum up, I think that fancy graphics serves more of a motivational role (you see what you did quickly, and it's easy to show others what you can do with a computer) than a learning role (I don't think it makes learning real programming much easier).

In the late 80s, before I was programming in C, I was programming in Applesoft BASIC and Logo. As a child I thought the turtle was great because it made programming simple. If I decide to teach my children Logo, I will probably start here to get an actively developed Logo interpreter.

The key thing about LOGO is user-defined functions. It is very good at conveying that, as long as you emphasize it. Show interactively how to draw a square, then make a new word called square. Then show how you can draw patterns using square. Then make those patterns into words, and so on.
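A minimal sketch of that progression, using Python's built-in turtle module in place of Logo (the Logo words would look almost the same):

```python
import turtle

def square(size):
    """Draw one square -- the equivalent of teaching the turtle a new word."""
    for _ in range(4):
        turtle.forward(size)
        turtle.right(90)

def pattern(count, size):
    """Reuse the new 'word' to build a rotated pattern of squares."""
    for _ in range(count):
        square(size)
        turtle.right(360 / count)

pattern(12, 80)   # twelve squares, each rotated 30 degrees
turtle.done()
```

The same lesson carries over: once square exists, the student stops thinking in pen movements and starts thinking in the vocabulary they have built.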

You could do worse in teaching programming than using a tool like Scratch. It's a drag and drop programming interface and can be used to teach basic concepts of programming with some fun visual results (as can be seen from the gallery on their website).
Rob

Logo gave me a very clear picture (no pun intended) of how recursive functions work, and since I was doing assembly programming at the time, the need to return to the previous state when returning from a method became very clear with Logo.
Recursive implementations of things were also very easy to see the effect of.
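A hedged sketch of what makes that click, in Python's turtle module rather than Logo: each recursive call must undo its own turns and movement before returning, exactly like restoring state before returning from a subroutine.

```python
import turtle

def tree(length, depth):
    """Draw a branch, recurse into two sub-branches, then restore the turtle's state."""
    if depth == 0:
        return
    turtle.forward(length)
    turtle.left(30)
    tree(length * 0.7, depth - 1)
    turtle.right(60)
    tree(length * 0.7, depth - 1)
    turtle.left(30)          # undo the net turn...
    turtle.backward(length)  # ...and walk back to where this call started

turtle.left(90)   # point the turtle upward first
tree(80, 6)
turtle.done()
```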

I wrote script/code in a C-like dialect for a game called Doom2 before I knew what programming was, so when it came to seriously learning about concepts such as pointers, inheritance and polymorphism, I found the basics a breeze because I could construct a mental model to not only help me understand, but also appreciate how cool things like pointers and arrays are.
A friend of mine is a good programming student, but he gets frustrated when he can't visualize an algorithm working. When I started helping other students I found they had the same problem: if they can't see something working, it's harder to appreciate as a fledgling programmer. The same friend eloquently suggested I "Show 'em some crazy pimp shit and then show them how it's done". He's right: even if someone really wants to learn something, they'll be able to draw on more mental energy if they think what they're learning lets them do awesome things.
My best bit of advice is this: AT THE START SPEND AS LITTLE TIME PROGRAMMING TO THE CONSOLE AS POSSIBLE
It makes you feel constrained, and your efforts appear futile; only after you appreciate the console as a front end should it be used for learning to program. I wouldn't use Logo myself because I don't think it can teach concepts such as the aforementioned polymorphism or inheritance nearly as well as other methods. A friend of mine is teaching a teenager how to program using XNA in a wrapper, and I think anything that can let you blit an image to the screen is fine. That way you can see why you'd want an abstract base class called EnemyEntity with behavior that's inherited by zombie, dog, etc. It's not that the concepts are hard to understand, it's just that at first they're hard to appreciate.
I could go on but I think that puts across what I've learned by teaching others. I think using graphics in teaching programming allows students to gain the ability to build mental models of intangible concepts faster than any other.
XNA: if you want to teach C#, that's an amazing graphics library; just write a wrapper sprite class to hide as much complexity as possible when first starting out and teaching concepts.
SDL: a lower-level library if you're going to start with C++.

During one of my first-year computer science papers we used Java to create fractal patterns via a turtle object.
It was pretty fun to see visually whether or not we had correctly implemented the algorithm required to produce a certain pattern. However, to answer the main question, I wouldn't say that programming via a turtle is useful. I'd say the best way to teach someone to program is to get them to build their own app to do whatever they want it to do. This gives them creative control, plus if they get stuck they can learn how to resolve a problem.
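For a flavor of the exercise (ours was in Java; this rough equivalent uses Python's turtle module), here is the classic Koch snowflake, where a wrong recursion is immediately visible on screen:

```python
import turtle

def koch(length, depth):
    """One side of the Koch snowflake: replace a segment with four shorter ones."""
    if depth == 0:
        turtle.forward(length)
        return
    koch(length / 3, depth - 1)
    turtle.left(60)
    koch(length / 3, depth - 1)
    turtle.right(120)
    koch(length / 3, depth - 1)
    turtle.left(60)
    koch(length / 3, depth - 1)

for _ in range(3):   # three sides make the snowflake
    koch(200, 3)
    turtle.right(120)
turtle.done()
```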

I strongly suggest starting with an interpreted language like Logo (not a compiled one) because of the quality of the error messages. Reading error messages is very important in this process. Also, at the easy level, Logo allows you to run your instructions one by one in direct mode and carry them over to your procedures once you get the expected results.
@Alex: MicroWorlds is a commercial version of Logo, and it exists in English, Spanish, Portuguese, Italian, Russian, etc., which is a big plus if you are not a native English speaker.

LOGO is not only Turtle-Graphics.
There are also other interesting concepts in it which come from LISP.
'Turtle' is just icing on the cake and the "imperative" side of Logo.
:)

I learned to program in BASIC by writing simple programs drawing faces (I mean circles and squares) on the screen. Somehow the whole turtle programming was never my thing, although a few of my friends learned that way. Later on I moved to Pascal, then to Delphi, Java and C++/C#.
In my opinion the trick is to "wow" your student and impress/empower with potential things that you can accomplish by writing your own programs. I would actually demonstrate some GUI programming or game programming. It's much easier to learn the basics by keeping the end goal in mind.
Recently I came across SmallBasic - a cool programming environment for kids designed to teach concepts. I would give that a try. It comes with a pretty complete paper describing how to use it.

When I got my first computer (VIC-20) and started programming it was very hard to explain to my parents what I was doing.
My mother took a course in computing, preparing for a project to computerize the library she worked in. They had a couple of classes introducing them to programming. After learning LOGO she came home and said that she suddenly understood what I was into.
So LOGO with turtle graphics brought us closer together!

I did a "computing for kids" course in the late eighties, and there was an extensive section on turtle graphics using logo. In all honesty I was bored to tears, and learned virtually nothing from it.
I think "programming the turtle" might work better for someone who is artistically inclined, or hugely into geometry, but by and large, there are far more interesting problems to attack, even for kids.

Ah, the memories of good old Logo. I think I got more of a geometry lesson than a programming lesson out of it, e.g. figuring out how much to turn at various points to produce a particular shape, design or pattern. It may work if you plan on mixing geometry with the programming, but if the person doesn't have the basics of geometry (what a square is and how it differs from other 4-sided shapes, what a triangle is, etc.), the geometry itself becomes the obstacle.
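For example, the geometry hiding in the turtle is the exterior angle: to draw a regular polygon you turn 360/n degrees at each corner, which is obvious in code but not at all obvious to someone without the geometry. A small Python turtle sketch:

```python
import turtle

def polygon(sides, length):
    """The turn at each corner is the exterior angle: 360 / sides degrees."""
    for _ in range(sides):
        turtle.forward(length)
        turtle.right(360 / sides)

polygon(3, 100)   # triangle: turn 120 degrees at each corner
polygon(4, 100)   # square: turn 90
polygon(6, 100)   # hexagon: turn 60
turtle.done()
```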

I used logo and turtle at school too, a great introduction.
It looks like our kids will be getting a slightly updated interface with Microsoft Kodu. It looks very impressive: an icon-based programming language made for creating games that runs on Xbox Live.

I'm currently learning python and using a little bit of turtle. In labs we haven't used it, but our homework does. It's nice to know it exists, and it's a good way to get certain commands and syntax in. Overall I don't feel it was completely necessary though.

When I was young, I found it very interesting. It was one of the first programming languages that I learned, even though I used it for only about two days. It started my interest in programming.
Looking back, I think the syntax is a bit unclear because most statements are abbreviations; computers are far more powerful now, so the language could profit from clearer statements. Another factor is the native language of the person who is learning to use it. If English is not your native language, then Logo becomes a bit more complex to understand. So if you're teaching Logo to children, make sure they're familiar with English terms first. (Quite easy if you're a native English speaker. More complex if you're originally Dutch, German, French, or Portuguese. Even more complex if you're Russian or Chinese, because you'd have to adjust to a different character set too.)

I have just begun teaching my 7-year-old how to program using Logo, and he is having a load of fun with it. The commands are easy enough for his limited reading ability and he just loves drawing cool pictures using the turtle graphics. I was amazed at how well he retained what he had learned using it, so I feel it was a good choice for his age.
For older kids (or adults), other languages might have more advantages as a beginner language, though.

Personal experience, YMMV...
My first encounter with a computer was turtle graphics in my early teens. I loved it and was immediately hooked. (Perhaps because for the first time someone [something] did exactly what I told it to do?)
The visual and instant feedback made me want to do more and more. I really wanted to figure out how to replicate the pictures I saw in the book I was using. Without me even classifying it as "work", it slowly built up my early programming skills and my confidence I could learn on my own.
I credit it with setting me on the path I'm on today: a happy software developer who can't believe I get paid to do this work (I know, I know - all corporate snickering aside, I like my work).
As I said, YMMV.

Related

Path to Become a Better F# Programmer [closed]

I would like to hear from you folks that have achieved a high level of proficiency in F# (and in functional programming in general): what should my steps be from now on to become a better/professional F# programmer?
I already know much of the F# syntax and have some years of experience with C++. My goal is, as an engineer and mathematician, to design better scientific libraries (linear algebra packages, partial differential solvers, etc.).
what should be my steps from now on to become a better/professional F# programmer?
Keep coding every day :)
I jumped on the F# train in Sept '07, before that I had a boatload of C# experience. It took about 3 months or so for me to stop writing code as C# with a little funnier syntax and start picking up on the right coding style :)
Tips and things that helped me:
I found that F# makes it really hard to write non-idiomatic code, and really easy to write good, clean code. If you find yourself fighting the compiler, 9 times out of 10 you're doing something wrong. Go back to the drawing board and try again.
The entire concept of immutability was a mystery at first, but implementing all of the data structures from Okasaki's Purely Functional Data Structures was hugely helpful.
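To give a flavor of those structures, here is a toy persistent stack, sketched in Python for neutrality (Okasaki's examples are in ML; an F# version would lean on its built-in immutable lists). Every "modification" returns a new stack and leaves the old one intact:

```python
# A persistent (immutable) stack: cells are never mutated, only shared.
EMPTY = None

def push(stack, value):
    return (value, stack)   # a new cell pointing at the old stack

def pop(stack):
    value, rest = stack
    return value, rest      # the original stack still exists, unchanged

s1 = push(push(EMPTY, 1), 2)
value, s2 = pop(s1)
print(value)  # 2
print(s1)     # (2, (1, None)) -- popping did not alter s1
```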
During some slow days at work, between '08 and '09, I wrote a wikibook. I haven't looked at it in a while, but I'm sure it's really bad --- but the experience of explaining the language to others was a good jumpstart for someone like me who normally wouldn't have enough motivation to start a pet project in F# :)
Map, Fold, and Filter are your friends. Try to express algorithms in these functions rather than implementing a loop with recursion.
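A small illustration of that shift, sketched in Python (F# spells these List.filter, List.map, and List.fold):

```python
from functools import reduce

numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Loop-with-accumulator style:
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Filter/Map/Fold style -- the shape functional languages encourage:
total2 = reduce(lambda acc, x: acc + x,
                map(lambda n: n * n,
                    filter(lambda n: n % 2 == 0, numbers)),
                0)

assert total == total2  # both sum the squares of the even numbers
```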
Non-tail recursive functions are almost always easier to read and write. See here.
Project Euler. Lots of people recommend it; I didn't find it particularly helpful at all. You might get more use from it than I did if you're a mathematician, however.
<3 unions! Use them!
Falling back on mutable state -- big no-no. At least for beginners. The worst beginner code is full of mutables and ref variables. Immutability is an alien concept at first, so I recommend writing fully stateless programs for a while.
Still, the best advice is just to keep coding every day.
Hope that helps!
-- Juliet
Like with any other language, now that you know the syntax and the basics, it's time to write code and more code.
Make sure you have the core concepts of functional programming down.
Work on a large project so you also get familiar with the large and not just the small.
Write a few immutable data structures.
Work on a large project without using inheritance.
If you're familiar with design patterns, implement some of them in a purely functional way and notice how some of them disappear.
F# is about mixing functional and OOP styles. Once you've done a fair amount of abstraction without inheritance, bring it back and start mixing the styles together. Find a balance.
Since your goal as an engineer and mathematician is to design better scientific libraries, may I suggest, as a learning exercise, working on a video-game-style simulation: something that involves physics and math but also requires control of state.
I can only agree that trying to explain functional programming to others is a great way to learn it. I spent a lot of time thinking about the structure of my F# book, and I think it really helped me understand how functional concepts relate. Even giving a talk on F# at your company or to your friends should have a similar effect.
When I started learning F#, I started working on the F# WebTools project. I think this was quite useful, because many components of the project were perfect for functional programming, so I learned many functional tricks (because they were the best way to solve the problem). The project processed the F# source code tree and translated it to JavaScript, so I was using a lot of recursive functions and discriminated unions.
The area you're working in is quite different from mine, so I cannot give you any specific advice, but it is a good idea to write programs in a clear functional way - even if you think they would look nicer written in the C++ style. When you write them, you'll probably find some way to simplify your code.
So, I think that the tips I could give are:
Try to explain F# to others - this helps you to organize ideas in your mind
Pick good problems to start with - e.g. algorithmic problems, processing tree structures, etc.
Write as much F# as you can and don't be afraid to start with a solution that does not look perfect to you - I rewrote my first program so many times!
You very probably don't want to hear from me, as I have not achieved a high level of proficiency with F#, but I have set myself the goal of getting there (much like yourself).
I thought I might suggest what has turned out to be my biggest learning tool: Stubbornness.
I set myself a large-ish project (a game, just as gradbot suggests). I then decided to code using as much immutable data as possible, regardless of the performance costs.
Then if I couldn't find a way to use immutable data I'd come here and ask for help.
The hoops this stubborn approach forced me to jump through have been a brilliant learning exercise, as (just as Juliet mentions) F# allows you to write really ugly C# if you let yourself get lax.
Right now I have a tile-based 2D world and a little man who can find his way to the treasure and back home again using A* path-finding... The only thing I mutate is the title of the window it's displayed in.
I only got serious about learning F# towards the end of July (I'd dabbled before then) and this project has taught me a huge amount, along with the help of the guys here at StackOverflow.
The other answers do a good job of suggesting ways to practice your F# writing skills, which is obviously very important. However, something which doesn't seem to have been emphasized much is reading F# code, which is just as important in my opinion. There are a lot of people doing cool stuff with F#, and one of the best ways to learn about the different corners of the language is to read about what they're doing. This will help you pick up idiomatic style and will also expose you to more design patterns, language features, and functional programming techniques than you would be likely to encounter if you were just solving a problem using F# on your own.
The F# blogs syndicated on Planet F# are a good place to start, but there are also several excellent books and presentations on the language.
This only answers part of your question, but as you talk about wanting to create libraries in F#, the F# component design guidelines just came out:
http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/fsharp-component-design-guidelines.pdf
If you are interested in scientific libraries, I suggest you take a look at Jon Harrop's F# for Scientists.
Also, to cater to your mathematician side, I suggest you read Doets and Van Eijck's The Haskell Road to Logic, Maths and Programming; although it is written in Haskell, you will certainly be able to follow most of the text, and re-implementing the samples in F# could be a nice exercise.

What are the essential concepts all programmers should learn and use? [closed]

I'm currently learning to program, and I didn't take CS classes, so I'm basically starting out at the bottom. I have been putting together code on and off for many years, but haven't really had a good understanding of the essential concepts needed for engaging in bigger projects. Object-orientation is an obvious one, and I feel I'm beginning to understand some of the concepts there. Then there is a lot of buzz and methodology, such as MVC, UML, SCRUM, SOLID and so forth and so on. I've looked at many of these, but I'm always stumped, as most explanations seem to require some understanding of other concepts.
I want to learn this stuff the "right" way, so where do I begin?
What are the overarching constructs I need to understand that enable me to understand all the underpinnings of software architecture/design/development?
What am I missing?
Are there constructs and concepts that can and should wait until I've cleared the foundation?
The SOLID principles are probably the most important.
From those you understand the motivation behind using a pattern such as MVC, why people think of persistence ignorance as important and so on. They are at the core of the majority of good practices.
Loose coupling, high cohesion.
And as for books, Code Complete covers almost everything at some level, at least.
Software development is a HUGE arena and you should be careful that you don't take on too much too quickly. Unless you're going to go in the direction of functional programming I'd suggest you start off by making sure you fully understand the concepts surrounding OO design and programming as this should be your foundation.
Once you understand that well you'll be able to understand design patterns a lot better and get a feeling for when to use them.
I'd suggest you try out a few languages till you find one you feel comfortable with. Personally, my favourite language is Ada, which is a very pure OO language, but in the business world I work in C#, which still has a lot of issues, though these are outweighed by the more vibrant job market.
I wouldn't worry too much about Scrum at this stage as you need to focus more on your dev skills before worrying about project management.
The most important thing is to work with as much code as possible, download lots of good reference solutions and work through the code till you understand it, and try and keep an eye on the development trends.
If it's viable, you may also want to consider attending some developer conferences too, as these can be very inspirational.
Stay away from ACRONYMS (including those you've listed) and Methodologies(tm). At least in the beginning.
Read good books. Start with this one: The Pragmatic Programmer. Learn algorithms and data structures, possibly from Introduction to Algorithms by Cormen et al.
Write a lot of code. Practice is more important than anything else.
Learn how to test software with unit tests. Being able to do that will solve 90% of the other issues automatically, since you can't test effectively while they are around.
When you know how to test, you can start on advanced topics like design.
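A minimal example of the idea, using Python's built-in unittest module (the median function is made up for illustration):

```python
import unittest

def median(values):
    """Middle value of a sorted copy; average of the two middles for even length."""
    s = sorted(values)
    mid = len(s) // 2
    if len(s) % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

class MedianTests(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

if __name__ == "__main__":
    unittest.main()
```

Code that is hard to drive from a test like this is usually telling you something about its design.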
I'd recommend "Object-Oriented Analysis and Design with Applications" by Grady Booch et al. The latest edition has a detailed explanation of the concepts of OOAD, including MVC and UML (which Booch co-created), and discussions on how to manage the whole process of software development. The second part of the book exemplifies all this by developing 5 sample systems (with sometimes orthogonal aspects, from the very core).
Another good one is of course Design Patterns by GoF which will give you an idea of loose coupling, ways to efficient encapsulation and reuse of code, etc
As for the algorithmic part, take any book that is not bound to a particular programming language. My favorite is Introduction to Algorithms by T. H. Cormen et al; it gets a bit theoretical at some points, but I especially like that they prove certain things rather than just asking you to believe them.
When you are working with any modern general purpose language, it is probably a good idea to get a handle on patterns (MVC or Model-View-Controller is one). The book by the "gang of four" is a must read for this, or at least research a few and use it as a reference.
Refactoring is another concept that should be in your arsenal. The book by Martin Fowler on this subject is a very nice read and helps you understand the aforementioned patterns better; a little explanation of UML is included as well.
Can't post more than one hyperlink so...
search on amazon for: Refactoring, Improving the design of existing code
When you want to communicate your designs, UML (Unified Modelling Language) is the 'tool' of choice for many people. UML is large and unwieldy, but Martin Fowler (again) has managed to boil it down to the essentials.
search on amazon for: UML Distilled (make sure you get the most recent one)
SCRUM is one of many methods used to manage software development groups; I do not think there is much merit in learning it when you are just starting out or on your own. Especially not in detail.
Hope it helps...
PS: I haven't heard about SOLID yet; somebody else will have to help you there.
You'd have a decent foundation if you surveyed basic Data Structures, Algorithms, and Algorithms Analysis.
I think that you should start coding real world problems to get a feel for problems in the programming domain.
Then you have a better background to understand why objects are important. Then, after managing objects, you will learn why patterns and OO principles are important.
Personally, I highly recommend Agile Software Development by Robert C. Martin.
But it may be a long and tiresome read unless you have a feel for the problems being solved. I'm afraid that you may need 500-1000 hours of coding at the minimum before you get an appreciation that the problems being solved are real.
And it probably takes 7000+ hours before you develop an instinctive heart-felt pain from merely reading the problems, making this sort of book become the page-turner that it should be.
Regrettably, many of the sound practices that you should develop are only appreciated after having to live with your code over time. If you just do many exercises and abandon the code afterwards just "because it works", then you are missing out on the greatest pain of all. That is a luxury our industry does not have, and "technical debt" is very real and very costly for those with large code bases.
I feel kinda silly answering my own question like this.. :) But one valuable resource I've found for learning to write code is Project Euler at http://www.projecteuler.net
It's basically a collection of mathematical problems that you solve by writing your own solution. Once you've found the answer to a particular problem, you're allowed access to that problem's forum where different solutions are discussed. I was amazed at how much I was learning by a) solving a challenge, b) reading about other people's approaches and c) seeing how many programming languages there are out there! :)
The problems start out easy (you can tell by the number of people who've solved them) and progress to harder and harder ones.
Currently I'm working on problem #3, having solved the previous two... I recommend you start chippin' away at them, no matter your level!
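To give a sense of the early problems: problem #1 asks for the sum of all multiples of 3 or 5 below 1000, which fits in a line or two of Python:

```python
# Project Euler problem 1: sum the multiples of 3 or 5 below 1000.
print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))
```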

What are good "real" programming examples for a beginning programmer? [closed]

I've been browsing Bjarne Stroustrup's new introductory programming book, Programming: Principles and Practice Using C++. It's meant for first-year university computer science and engineering students.
Early on in the book he works through an interesting extended example of creating a desktop calculator where he ends up implementing an arithmetic expression evaluator (that takes bracketed expressions and operator precedence into account) in a series of co-recursive functions based on a grammar.
This is a very interesting example, although arguably on the complex side for a lot of beginners.
I wonder what others think of this particular example: would learning programming by seeing how to implement an expression parser excite and motivate you, or would it discourage you because of all the details and complexity?
Are there other good "real" programming examples for beginners?
When I was first learning to program, the best example I ever worked with was building a text adventure game from scratch. The basics just required knowing how to display text on the screen, receive input from the keyboard, and rudimentary flow control. But since text adventures always have room to add more features/puzzles/whatever, they can be easily adapted to explore aspects of whichever language you're learning.
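A bare-bones sketch of that starting point, in Python (the rooms and text are invented for illustration):

```python
# Print text, read input, branch: the whole skeleton of a text adventure.
rooms = {
    "hall":    {"description": "A dusty hall. Doors lead north and east.",
                "north": "library", "east": "kitchen"},
    "library": {"description": "Shelves of old books. A door leads south.",
                "south": "hall"},
    "kitchen": {"description": "It smells of bread. A door leads west.",
                "west": "hall"},
}

location = "hall"
while True:
    print(rooms[location]["description"])
    command = input("> ").strip().lower()
    if command == "quit":
        break
    elif command in rooms[location]:
        location = rooms[location][command]
    else:
        print("You can't go that way.")
```

Every new feature (items, puzzles, scoring) then maps naturally onto a new language concept.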
Of course, not everyone finds games more interesting than calculators. It really depends on the programmer.
First, let me say that cognitive psychologists have shown in numerous studies that the most important factor in learning is the desire to know.
If you want to learn about programming, you need to find a domain that stokes your desire to understand. Find a challenge that can be solved with programming.
I agree with the other folks who suggest picking something that you are interested in. And games seem to be a common thread. As I reflect on my experience learning to program (too many years ago), math problems and a simple game were involved.
However, I don't think I really understood the power of software until I created a useful small program that helped a business person solve a real problem. There was a tremendous amount of motivation for me because I had a "client". I wasn't getting paid, but the client needed this program. There was sincere pain (gotta get my job done quicker) related to this situation.
So my advice is to talk to people you know and ask what small annoyances or computer-related obstacles they have. Then try to fix one. It may be a simple web widget that reduces repetitive, manual tasks for an office worker.
One of my best early works was helping a little printing shop (no software, circa 1985) that struggled with estimating jobs to produce proposals that weren't money-losers. I asked a lot of questions of the sales lady and of the operations manager. There was obviously an intersection of a common pain point with a really easy calculation that I could automate. It took me a couple of days to learn Lotus 1-2-3 (a spreadsheet, for you young-uns) well enough to write a few macros. I was motivated. I had passion. I saw where I could make a difference. And that, more than anything else, drove me to learn some simple programming.
Having real people, real problems, and really simple solutions could be the inspiration you need as a beginning programmer. Don't try to write an accounting system. Just take one discrete piece of someone's frustration away. You can build on that success.
So, I wouldn't focus on the technique (yet). Don't worry about, "Am I doing this the most efficient way?" The main objective for a beginner is to have success, no matter how small, and build confidence.
BTW, that Lotus 1-2-3 set of macros grew into a full job tracking system. Very archaic, limited features, but made that little print shop much more profitable.
Create your motivation, fuel your desire, and develop your passion for programming like an artist unveils the masterpiece in a blob of clay. And be persistent. Don't give up when challenged with a roadblock. We all get stumped sometimes. Those are some of the best learning moments because humans learn more from failure than success.
Good luck.
I think making tiny games, like a text version of Tetris, is a good way of getting into the programming world.
Board games are fun to design and code since they come in many shapes and difficulties, from tic-tac-toe to checkers to Monopoly. It's reinventing the wheel for educational purposes!
The best advice I can think of is to pick something from a field of interest you have, because coding for the sake of coding might dim your resolve.
Start small. Do examples that interest you. Stretch yourself just a little every time. Get comfortable with each step, to the point that you have confidence that you know what you're doing, and then try something a little harder the next time.
I think that any example program would help you learn a new language, but a beginner should try to work with something that is easy to understand in the real world, such as a mortgage calculator or something along those lines.
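For instance, a mortgage calculator is little more than the standard amortization formula; a Python sketch with illustrative figures:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization: P * r * (1 + r)**n / ((1 + r)**n - 1)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

print(round(monthly_payment(250000, 0.06, 30), 2))  # about 1498.88
```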
I think the answer is that it would depend on the person who is learning how to program.
One nice thing about something like an arithmetic expression evaluator is that it is a project that can start very small (make it work with just the format "X SYMBOL Y" where X and Y are single-digit numbers and SYMBOL must be a plus sign) and then slowly expand the functionality to the point of a complicated system.
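A sketch of that very first stage in Python (the function name is invented; Stroustrup's own version is in C++ and grows into a full grammar):

```python
def eval_tiny(expr):
    """Evaluate 'X+Y' where X and Y are single digits -- stage one only."""
    x, op, y = expr          # a 3-character string unpacks into its characters
    if op != "+":
        raise ValueError("only '+' is supported so far")
    return int(x) + int(y)

print(eval_tiny("3+4"))  # 7
# Later iterations: multi-digit numbers, more operators, precedence, parentheses.
```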
However, it might not be a great starter project for someone who doesn't really understand the concept of computers (hard disk, memory, etc.)
Try to think of something that you do on a computer that is repetitive, and could be easily automated. Then try to come up with how to make a program that automates that task for you. It can be anything, whether it's popping up a reminder every 15 minutes to stretch your legs or cleans up your temp directory on a regular basis.
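For example, the "stretch your legs" reminder is only a few lines of Python (a deliberately naive sketch; a real version might use the OS scheduler instead of a sleep loop):

```python
import time

while True:
    time.sleep(15 * 60)   # wait fifteen minutes
    print("Time to stretch your legs!")
```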
The problem with the calculator task is that it's complex and not related to real life. I don't need another calculator.
But once I had a CD with a scratched surface near its center and lots of valuable JPEG files on it. I dumped the data from the unscratched part of the disc, but the filesystem itself was surely lost. So I wrote a program which analysed the dump and separated it into files. It was not very simple, but it was a nice and exciting file-IO programming exercise.
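A rough Python sketch of the same carving idea (not the author's actual program; the file names are invented): scan the dump for the JPEG start-of-image marker (FF D8 FF) and end-of-image marker (FF D9), and write out each span found.

```python
with open("disk.dump", "rb") as f:
    data = f.read()

count = 0
start = data.find(b"\xff\xd8\xff")          # JPEG start-of-image marker
while start != -1:
    end = data.find(b"\xff\xd9", start)     # end-of-image marker
    if end == -1:
        break
    with open(f"recovered_{count}.jpg", "wb") as out:
        out.write(data[start:end + 2])      # include the end marker itself
    count += 1
    start = data.find(b"\xff\xd8\xff", end)

print(f"recovered {count} candidate files")
```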
Examples can be more complex than something you try to write yourself. It's easier to follow someone else doing something than it is to do it yourself. A real-world example like this calculator may be a fine way to introduce someone to a language. For instance, Practical Common Lisp starts with an example of an in-memory database (for CDs I think) and uses that as the springboard to explore parts of the language.
I prefer seeing a real example built up over time than just a lot of simple "Hello World" programs.
I've always found that implementing a game of some sort is sufficient incentive to learn various features of a language. Card games, especially, because they generally have simple rule sets to implement, but are sufficiently complex from an abstract point of view.
I would agree, though, with everyone else: find examples of things that interest you. Not everyone is a game fan, but something like a mortgage calculator would be far more interesting.

How to approach learning a new SDK/API/library?

Let's say that you have to implement some functionality that is not trivial (it will take at least one work week). You have an SDK/API/library that contains (numerous) code samples demonstrating the usage of the part of the SDK needed to implement that functionality.
How do you approach learning all the samples, extracting the necessary information, techniques, etc., in order to use them to implement the 'real thing'? The key questions are:
Do you use some tool for diagramming of the control flow, the interactions between the functions from the SDK, and the sample itself? Which kind of diagrams do you find useful? (I was thinking that the UML sequence diagram can be quite useful together with the debugger in this case).
How do you keep the relevant and often interrelated information about SDK/API function calls, the general structure and call order in the sample programs that have to be used as a reference - mind maps, plain-text notes, comments added to the sample code, some refactoring of the sample code to suit your personal coding style in order to make learning easier?
Personally, I use the prototyping approach. Keep development to manageable iterations. In the beginning, those iterations are really small. As part of this, don't be afraid to throw code away and start again (every time I say that, somewhere a project manager has a heart attack).
If your particular task can't easily or reasonably be divided into really small starting tasks then start with some substitute until you get going.
You want to keep it as simple as you can (the proverbial "Hello world") just to familiarize yourself with building, deploying, debugging, what error messages look like, the simple things that can and do go wrong in the beginning, etc.
I don't go as far as using a diagramming tool, sorry (I barely see the point of that for my job).
As soon as you start trying things you'll get the hang of it, even if in the beginning you have no idea of what's going on and why what you're doing works (or doesn't).
I usually compile and modify the examples, making them fit something that I need to do myself. I tend to do this while using and annotating the corresponding documents. Being a bit old school, the tool I usually use for diagramming is a pencil, or for the really complex stuff two or more colored pens.
I am by no means a seasoned programmer. In fact, I am learning C++ and I've been studying the language primarily from books. When I try to stray from the books (which happens a lot because I want to start contributing to programs like LibreOffice), for example, I find myself being lost. Furthermore, when I'm using functionality of the library, my implementations are wrong because I don't really understand how the library was created and/or why things need to be done that way. When I look at sample source code, I see how something is done, but I don't understand why it's done that way which leads to poor design of my programs. And as a result, I'm constantly guessing at how to do something and dealing with errors as I encounter them. Very unproductive and frustrating.
Going back to my book comment, two books which I have read from cover to cover more than once are Ivor Horton's Beginning Visual C++ 2010 and Starting Out with C++: Early Objects (7th Edition). What I really loved about Ivor Horton's book is that it contained thorough explanations of why something needs to be done a certain way. For example, before any Windows programming began, lots of explanation about how Windows works was given first. Understanding how and why things work a certain way really helps how I develop software.
So, to contribute my two pennies towards answering your question: I think the best approach is to pick up well-written books and sit down and begin learning about that library, API, SDK, whatever, in a structured approach that offers real-world examples along with explanations as to how and why things are implemented as they are.
I don't know if I totally missed your question, but I don't think I did.
Cheers!
This was my first post on this site. Don't rip me too hard. (:

Textual versus Graphical Programming Languages [closed]

I am part of a high school robotics team, and there is some debate about which language to use to program our robot. We are choosing between C (or maybe C++) and LabVIEW. There are pros for each language.
C(++):
Widely used
Good preparation for the future (most programming positions require text-based programming)
We can expand upon our C codebase from last year
Allows us to better understand what our robot is doing.
LabVIEW
Easier to visualize program flow (blocks and wires, instead of lines of code)
Easier to teach (Supposedly...)
"The future of programming is graphical." (Think so?)
Closer to the Robolab background that some new members may have.
Don't need to intimately know what's going on. Simply tell the module to find the red ball, don't need to know how.
This is a very difficult decision for us, and we've been debating for a while. Based on those pros for each language, and on the experience you've got, what do you think the better option is? Keep in mind that we aren't necessarily going for pure efficiency. We also hope to prepare our programmers for a future in programming.
Also:
Do you think that graphical languages such as LabVIEW are the future of programming?
Is a graphical language easier to learn than a textual language? I think that they should be about equally challenging to learn.
Seeing as we are partially rooted in helping people learn, how much should we rely on prewritten modules, and how much should we try to write on our own? ("Good programmers write good code; great programmers copy great code." But isn't it worth being a good programmer first?)
Thanks for the advice!
Edit:
I'd like to emphasize this question more:
The team captain thinks that LabVIEW is better for its ease of learning and teaching. Is that true? I think that C could be taught just as easily, and beginner-level tasks would still be around with C. I'd really like to hear your opinions. Is there any reason that typing while{} should be any more difficult than creating a "while box?" Isn't it just as intuitive that program flows line by line, only modified by ifs and loops, as it is intuitive that the program flows through the wire, only modified by ifs and loops!?
Thanks again!
Edit:
I just realized that this falls under the topic of "language debate." I hope it's okay, because it's about what's best for a specific branch of programming, with certain goals. If it's not... I'm sorry...
Before I arrived, our group (PhD scientists with little programming background) had been trying on and off to implement a LabVIEW application for nearly a year. The code was untidy, too complex (front and back end) and, most importantly, did not work. I am a keen programmer but had never used LabVIEW. With a little help from a LabVIEW guru who could help translate the textual programming paradigms I knew into LabVIEW concepts, it was possible to code the app in a week. The point here is that the basic coding concepts still have to be learnt; the language, even one like LabVIEW, is just a different way of expressing them.
LabVIEW is great to use for what it was originally designed for, i.e. to take data from DAQ cards and display it on-screen, perhaps with some minor manipulation in between. However, programming algorithms is no easier, and I would even suggest that it is more difficult. For example, in most procedural languages execution order generally follows line by line, using pseudo-mathematical notation (e.g. y = x*x + x + 1), whereas LabVIEW would implement this using a series of VIs which don't necessarily follow from each other (i.e. left to right) on the canvas.
Moreover programming as a career is more than knowing the technicalities of coding. Being able to effectively ask for help/search for answers, write readable code and work with legacy code are all key skills which are undeniably more difficult in a graphical language such as LabVIEW.
I believe some aspects of graphical programming may become mainstream - the use of sub-VIs perfectly embodies the 'black-box' principal of programming and is also used in other language abstractions such as Yahoo Pipes and the Apple Automator - and perhaps some future graphical language will revolutionise the way we program but LabVIEW itself is not a massive paradigm shift in language design, we still have while, for, if flow control, typecasting, event driven programming, even objects. If the future really will be written in LabVIEW, C++ programmer won't have much trouble crossing over.
As a postscript, I'd say that C/C++ is more suited to robotics, since the students will no doubt have to deal with embedded systems and FPGAs at some point. Low-level programming knowledge (bits, registers, etc.) would be invaluable for this kind of thing.
@mendicant: Actually, LabVIEW is used a lot in industry, especially for control systems. Granted, NASA is unlikely to use it for on-board satellite systems, but then software development for space systems is a whole different ball game...
I've encountered a somewhat similar situation in the research group I'm currently working in. It's a biophysics group, and we're using LabVIEW all over the place to control our instruments. That works absolutely great: it's easy to assemble a UI to control all aspects of your instruments, to view its status and to save your data.
And now I have to stop myself from writing a 5 page rant, because for me LabVIEW has been a nightmare. Let me instead try to summarize some pros and cons:
Disclaimer I'm not a LabVIEW expert, I might say things that are biased, out-of-date or just plain wrong :)
LabVIEW pros
Yes, it's easy to learn. Many PhDs in our group seem to have acquired enough skills to hack away within a few weeks, or even less.
Libraries. This is a major point. You'd have to carefully investigate this for your own situation (I don't know what you need, if there are good LabVIEW libraries for it, or if there are alternatives in other languages). In my case, finding, e.g., a good, fast charting library in Python has been a major problem, that has prevented me from rewriting some of our programs in Python.
Your school may already have it installed and running.
LabVIEW cons
It's perhaps too easy to learn. In any case, it seems no one really bothers to learn best practices, so programs quickly become a complete, irreparable mess. Sure, that's also bound to happen with text-based languages if you're not careful, but IMO it's much more difficult to do things right in LabVIEW.
There tend to be major issues in LabVIEW with finding sub-VIs (even up to version 8.2, I think). LabVIEW has its own way of knowing where to find libraries and sub-VIs, which makes it very easy to completely break your software. This makes large projects a pain if you don't have someone around who knows how to handle this.
Getting LabVIEW to work with version control is a pain. Sure, it can be done, but in any case I'd refrain from using the built-in VC. Check out LVDiff for a LabVIEW diff tool, but don't even think about merging.
(The last two points make working in a team on one project difficult. That's probably important in your case)
This is personal, but I find that many algorithms just don't work when programmed visually. It's a mess.
One example is stuff that is strictly sequential; that gets cumbersome pretty quickly.
It's difficult to have an overview of the code.
If you use sub-VI's for small tasks (just like it's a good practice to make functions that perform a small task, and that fit on one screen), you can't just give them names, but you have to draw icons for each of them. That gets very annoying and cumbersome within only a few minutes, so you become very tempted not to put stuff in a sub-VI. It's just too much of a hassle. Btw: making a really good icon can take a professional hours. Go try to make a unique, immediately understandable, recognizable icon for every sub-VI you write :)
You'll have carpal tunnel within a week. Guaranteed.
@Brendan: hear, hear!
Concluding remarks
As for your "should I write my own modules" question: I'm not sure. Depends on your time constraints. Don't spend time on reinventing the wheel if you don't have to. It's too easy to spend days on writing low-level code and then realize you've run out of time. If that means you choose LabVIEW, go for it.
If there were easy ways to combine LabVIEW and, e.g., C++, I'd love to hear about them: that might give you the best of both worlds, but I doubt there are.
But make sure you and your team spend time on learning best practices. Looking at each other's code. Learning from each other. Writing usable, understandable code. And having fun!
And please forgive me for sounding edgy and perhaps somewhat pedantic. It's just that LabVIEW has been a real nightmare for me :)
I think the choice of LabVIEW or not comes down to whether you want to learn to program in a commonly used language as a marketable skill, or just want to get stuff done. LabVIEW enables you to Get Stuff Done very quickly and productively. As others have observed, it doesn't magically free you from having to understand what you're doing, and it's quite possible to create an unholy mess if you don't - although anecdotally, the worst examples of bad coding style in LabVIEW are generally perpetrated by people who are experienced in a text language and refuse to adapt to how LabVIEW works because they 'already know how to program, dammit!'
That's not to imply that LabVIEW programming isn't a marketable skill, of course; just that it's not as mass-market as C++.
LabVIEW makes it extremely easy to manage different things going on in parallel, which you may well have in a robot control situation. Race conditions in code that should be sequential shouldn't be a problem either (i.e. if they are, you're doing it wrong): there are simple techniques for making sure that stuff happens in the right order where necessary - chaining subVI's using the error wire or other data, using notifiers or queues, building a state machine structure, even using LabVIEW's sequence structure if necessary. Again, this is simply a case of taking the time to understand the tools available in LabVIEW and how they work. I don't think the gripe about having to make subVI icons is very well directed; you can very quickly create one containing a few words of text, maybe with a background colour, and that will be fine for most purposes.
'Are graphical languages the way of the future' is a red herring based on a false dichotomy. Some things are well suited to graphical languages (parallel code, for instance); other things suit text languages much better. I don't expect LabVIEW and graphical programming to either go away, or take over the world.
Incidentally, I would be very surprised if NASA didn't use LabVIEW in the space program. Someone recently described on the Info-LabVIEW mailing list how they had used LabVIEW to develop and test the closed loop control of flight surfaces actuated by electric motors on the Boeing 787, and gave the impression that LabVIEW was used extensively in the plane's development. It's also used for real-time control in the Large Hadron Collider!
The most active place currently for getting further information and help with LabVIEW, apart from National Instruments' own site and forums, seems to be LAVA.
This doesn't answer you question directly, but you may want to consider a third option of mixing in an interpreted language. Lua, for example, is already used in the robotics field. It's fast, light-weight and can be configured to run with fixed-point numbers instead of floating-point since most microcontrollers don't have an FPU. Forth is another alternative with similar usage.
It should be pretty easy to write a thin interface layer in C and then let the students loose with interpreted scripts. You could even set it up to allow code to be loaded dynamically without recompiling and flashing a chip. This should reduce the iteration cycle and allow students to learn better by seeing results more quickly.
I'm biased against using visual tools like LabVIEW. I always seem to hit something that doesn't or won't work quite like I want it to do. So, I prefer the absolute control you get with textual code.
LabVIEW's other strength (besides libraries) is concurrency. It's a dataflow language, which means that the runtime can handle concurrency for you. So if you're doing something highly concurrent and don't want to have to do traditional synchronization, LabVIEW can help you there.
The future doesn't belong to graphical languages as they stand today. It belongs to whoever can come up with a representation of dataflow (or another concurrency-friendly type of programming) that's as straightforward as the graphical approach is, but is also parsable by the programmer's own tools.
There is a published study of the topic hosted by National Instruments:
A Study of Graphical vs. Textual Programming for Teaching DSP
It specifically looks at LabVIEW versus MATLAB (as opposed to C).
I think that graphical languages will always be limited in expressivity compared to textual ones. Compare trying to communicate in visual symbols (e.g., REBUS or sign language) to communicating using words.
For simple tasks, using a graphical language is usually easier but for more intricate logic, I find that graphical languages get in the way.
Another debate implied in this argument, though, is declarative programming vs. imperative. Declarative is usually better for anything where you really don't need fine-grained control over how something is done. You can use C++ in a declarative way, but you would need more work up front to make it so, whereas LabVIEW is designed as a declarative language.
A picture is worth a thousand words but if a picture represents a thousand words that you don't need and you can't change that, then in that case a picture is worthless. Whereas, you can create thousands of pictures using words, specifying every detail and even leading the viewer's focus explicitly.
LabVIEW lets you get started quickly, and (as others have already said) has a massive library of code for doing various test, measurement & control related things.
The single biggest downfall of LabVIEW, though, is that you lose all the tools that programmers write for themselves.
Your code is stored as VIs. These are opaque, binary files. This means that your code really isn't yours, it's LabVIEW's. You can't write your own parser, you can't write a code generator, you can't do automated changes via macros or scripts.
This sucks when you have a 5000 VI app that needs some minor tweak applied universally. Your only option is to go through every VI manually, and heaven help you if you miss a change in one VI off in a corner somewhere.
And yes, since it's binary, you can't do diff/merge/patch like you can with textual languages. This does indeed make working with more than one version of the code a horrific nightmare of maintainability.
By all means, use LabVIEW if you're doing something simple, or need to prototype, or don't plan to maintain your code.
If you want to do real, maintainable programming, use a textual language. You might be slower getting started, but you'll be faster in the long run.
(Oh, and if you need DAQ libraries, NI's got C++ and .Net versions of those, too.)
My first post here :) be gentle ...
I come from an embedded background in the automotive industry, and now I'm in the defense industry. I can tell you from experience that C/C++ and LabVIEW are really different beasts with different purposes in mind. C/C++ was always used for the embedded work on microcontrollers because it was compact and compilers/tools were easy to come by. LabVIEW, on the other hand, was used to drive the test system (along with TestStand as a sequencer). Most of the test equipment we used was from NI, so LabVIEW provided an environment where we had the tools and the drivers required for the job, along with the support we wanted...
In terms of ease of learning, there are many, many resources out there for C/C++, and many websites that lay out design considerations and example algorithms for pretty much anything you're after, freely available. For LabVIEW, the user community is probably not as diverse as C/C++'s, and it takes a little bit more effort to inspect and compare example code (you have to have the right version of LabVIEW, etc.)... I found LabVIEW pretty easy to pick up and learn, but there are nuances, as some have mentioned here, to do with parallelism and various other things that require a bit of experience before you become aware of them.
So the conclusion after all that? I'd say that BOTH languages are worthwhile in learning because they really do represent two different styles of programming and it is certainly worthwhile to be aware and proficient at both.
Oh my God, the answer is so simple. Use LabView.
I have programmed embedded systems for 10 years, and I can say that without at least a couple months of infrastructure (very careful infrastructure!), you will not be as productive as you are on day 1 with LabView.
If you are designing a robot to be sold and used for the military, go ahead and start with C - it's a good call.
Otherwise, use the system that allows you to try out the most variety in the shortest amount of time. That's LabView.
I love LabVIEW. I would highly recommend it, especially if the other members have used something similar. It takes a while for normal programmers to get used to it, but the results are much better if you already know how to program.
C/C++ means managing your own memory. You'll be swimming in memory leaks and worrying about them. Go with LabVIEW, and make sure you read the documentation that comes with it and watch out for race conditions.
Learning a language is easy. Learning how to program is not. This doesn't change even if it's a graphical language. The advantage of graphical languages is that it is easier to visualize what the code will do rather than sit there and decipher a bunch of text.
The important thing is not the language but the programming concepts. It shouldn't matter what language you learn to program in, because with a little effort you should be able to program well in any language. Languages come and go.
Disclaimer: I've not witnessed LabVIEW, but I have used a few other graphical languages including WebMethods Flow and Modeller, dynamic simulation languages at university and, er, MIT's Scratch :).
My experience is that graphical languages can do a good job of the 'plumbing' part of programming, but the ones I've used actively get in the way of algorithmics. If your algorithms are very simple, that might be OK.
On the other hand, I don't think C++ is great for your situation either. You'll spend more time tracking down pointer and memory management issues than you do in useful work.
If your robot can be controlled using a scripting language (Python, Ruby, Perl, whatever), then I think that would be a much better choice.
Then there are hybrid options:
If there's no scripting option for your robot, and you have a C++ geek on your team, then consider having that geek write bindings to map your C++ library to a scripting language (a rough sketch of such a binding follows this list). This would allow people with other specialities to program the robot more easily. The bindings would make a good gift to the community.
If LabVIEW allows it, use its graphical language to plumb together modules written in a textual language.
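As a rough sketch of what such a binding could look like, using Boost.Python and an entirely hypothetical RobotArm class (this is not any real robot's API):

    #include <boost/python.hpp>

    // Hypothetical C++ robot API the team's C++ geek wants to expose;
    // the class and method names are invented for this example.
    class RobotArm {
    public:
        void move_forward(double meters) { /* talk to the hardware */ }
        void turn(double degrees)        { /* talk to the hardware */ }
    };

    // Expose the class to Python as a module named "robot".
    BOOST_PYTHON_MODULE(robot) {
        using namespace boost::python;
        class_<RobotArm>("RobotArm")
            .def("move_forward", &RobotArm::move_forward)
            .def("turn", &RobotArm::turn);
    }

Once that compiles into a Python extension module, a teammate who knows no C++ can write import robot; arm = robot.RobotArm(); arm.turn(90).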
I think that graphical languages might be the language of the future..... for all those ad hoc MS Access developers out there. There will always be a spot for the purely textual coders.
Personally, I've got to ask what is the real fun of building a robot if it's all done for you? If you just drop a 'find the red ball' module in there and watch it go? What sense of pride will you have for your accomplishment? Personally, I wouldn't have much. Plus, what will it teach you of coding, or of the (very important) aspect of the software/hardware interface that is critical in robotics?
I don't claim to be an expert in the field, but ask yourself one thing: Do you think that NASA used LabVIEW to code the Mars Rovers? Do you think that anyone truly prominent in robotics is using LabView?
Really, if you ask me, the only thing that using cookie-cutter tools like LabVIEW to build this will prepare you for is to be some backyard robot builder and nothing more. If you want something that will give you more industry-like experience, build your own 'LabVIEW'-type system. Build your own find-the-ball module, or your own follow-the-line module. It will be far more difficult, but it will also be way cooler. :D
You're in High School. How much time do you have to work on this program? How many people are in your group? Do they know C++ or LabView already?
From your question, I see that you know C++ and most of the group does not. I also suspect that the group leader is perceptive enough to notice that some members of the team may be intimidated by a text-based programming language. This is acceptable: you're in high school, and these people are normies. I feel as though normal high schoolers will be able to understand LabVIEW more intuitively than C++. I'm guessing most high school students, like the population in general, are scared of a command line. For you there is much less of a difference, but for them, it is night and day.
You are correct that the same concepts may be applied to LabView as C++. Each has its strengths and weaknesses. The key is selecting the right tool for the job. LabView was designed for this kind of application. C++ is much more generic and can be applied to many other kinds of problems.
I am going to recommend LabView. Given the right hardware, you can be up and running almost out-of-the-box. Your team can spend more time getting the robot to do what you want, which is what the focus of this activity should be.
Graphical Languages are not the future of programming; they have been one of the choices available, created to solve certain types of problems, for many years. The future of programming is layer upon layer of abstraction away from machine code. In the future, we'll be wondering why we wasted all this time programming "semantics" over and over.
how much should we rely on prewritten modules, and how much should we try to write on our own?
You shouldn't waste time reinventing the wheel. If there are device drivers available in LabVIEW, use them. You can learn a lot by copying code that is similar in function and tailoring it to your needs - you get to see how other people solved similar problems, and you have to interpret their solution before you can properly apply it to your problem. If you blindly copy code, the chances of getting it to work are slim. You have to be good, even if you copy code.
Best of luck!
I would suggest you use LabVIEW, as you can get down to making the robot do what you want faster and more easily. LabVIEW has been designed with this in mind. Of course C(++) are great languages, but LabVIEW does what it is supposed to do better than anything else.
People can write really good software in LabVIEW as it provides ample scope and support for that.
There is one huge thing I found negative in using LabVIEW for my applications: managing design complexity. As a physicist I find LabVIEW great for prototyping, instrument control and mathematical analysis. There is no language in which you get a result faster and better than in LabVIEW. I had used LabVIEW since 1997. In 2005 I switched completely to the .NET framework, since it is easier to design and maintain.
In LabVIEW a simple 'if' structure has to be drawn and uses a lot of space on your graphical design. I just found out that many of our commercial applications were hard to maintain. The more complex the application became, the more difficult it was to read.
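For comparison, a branch that occupies a whole case-structure frame on a LabVIEW diagram is a couple of lines of text; a trivial C++ sketch (the names are illustrative):

    // Clamp a value to a limit - one case structure's worth of diagram.
    double clamp_to(double value, double limit) {
        if (value > limit) return limit;
        return value;
    }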
I now use text languages and I am much better at maintaining everything. If the comparison were C++ versus LabVIEW, I would use LabVIEW; compared to C#, it does not win.
As always, it depends.
I have been using LabVIEW for about 20 years now and have done quite a wide range of jobs, from simple DAQ to very complex visualization, from device control to test sequencers. If it were not good enough, I would surely have switched. That said, I started coding Fortran with punch cards and used a whole lot of programming languages on 8-bit 'machines', starting with Z80-based ones. The languages ranged from Assembler to BASIC, from Turbo Pascal to C.
LabVIEW was a major improvement because of its extensive libraries for data acquisition and analysis. One has, however, to learn a different paradigm. And you definitely need a trackball ;-))
I don't know anything about LabVIEW (or much about C/C++), but...
Do you think that graphical languages such as LabVIEW are the future of programming?
No...
Is a graphical language easier to learn than a textual language? I think that they should be about equally challenging to learn.
Easier to learn? No, but they are easier to explain and understand.
To explain a programming language you have to explain what a variable is (which is surprisingly difficult). This isn't a problem with flowgraph/nodal coding tools, like the LEGO Mindstorms programming interface, or Quartz Composer.
For example, take a fairly complicated LEGO Mindstorms program - it's very easy to understand what is going on... but what if you want the robot to run the INCREASEJITTER block 5 times, then drive forward for 10 seconds, then try the INCREASEJITTER loop again? Things start getting messy very quickly.
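For contrast, that same sequencing is unremarkable in a text language. A minimal C++ sketch, with the Mindstorms blocks stubbed out as hypothetical functions:

    #include <chrono>
    #include <thread>

    // Stand-ins for the Mindstorms blocks; the bodies are stubs because
    // this is only a sketch of the control flow.
    void increase_jitter() { /* the INCREASEJITTER block */ }
    void drive_forward()   { /* the drive block */ }
    void stop_driving()    { /* the stop block */ }

    int main() {
        for (int i = 0; i < 5; ++i)   // run INCREASEJITTER 5 times
            increase_jitter();

        drive_forward();              // drive forward for 10 seconds
        std::this_thread::sleep_for(std::chrono::seconds(10));
        stop_driving();

        increase_jitter();            // then try the jitter step again
        return 0;
    }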
Quartz Composer is a great example of this, and of why I don't think graphical languages will ever "be the future".
It makes it very easy to do really cool stuff (3D particle effects, with a camera controlled by the average brightness of pixels from a webcam), but incredibly difficult to do easy things, like iterating over the elements of an XML file, or storing that average pixel value in a file.
Seeing as we are partially rooted in helping people learn, how much should we rely on prewritten modules, and how much should we try to write on our own? ("Good programmers write good code, great programmers copy great code." But isn't it worth being a good programmer, first?)
For learning, it's so much easier to both explain and understand a graphical language.
That said, I would recommend a specialised text-based language as a progression. For example, for graphics, something like Processing or NodeBox. They are "normal" languages (Processing is Java, NodeBox is Python) with very specialised, easy-to-use (but absurdly powerful) frameworks ingrained into them.
Importantly, they are very interactive languages: you don't have to write hundreds of lines just to get a circle onscreen. You just type oval(100, 200, 10, 10), press the run button, and it appears! This also makes them very easy to demonstrate and explain.
More robotics-related, even something like LOGO would be a good introduction - you type "FORWARD 1" and the turtle drives forward one box. Type "LEFT 90" and it turns 90 degrees. This relates to reality very simply. You can visualise what each instruction will do, then try it out and confirm it really works that way.
Show them shiny-looking things; they will pick up the useful stuff they'd learn from C along the way. If they are interested, or progress to the point where they need a "real" language, they'll have all that knowledge, rather than running into the syntax-error-and-compiling brick wall.
It seems that if you are trying to prepare your team for a future in programming, C(++) may be the better route. The promise of general programming languages built from visual building blocks has never seemed to materialize, and I am beginning to wonder if it ever will. It seems that while it can be done for specific problem domains, once you get into trying to solve many general problems, a text-based programming language is hard to beat.
At one time I had sort of bought into the idea of executable UML, but it seems that once you get past the object relationships and some of the process flows, UML would be a pretty miserable way to build an app. Imagine trying to wire it all up to a GUI. I wouldn't mind being proven wrong, but so far it seems unlikely we'll be doing point-and-click programming anytime soon.
I started with LabVIEW about 2 years ago and now use it all the time, so I may be biased, but I find it ideal for applications where data acquisition and control are involved.
We use LabVIEW mainly for testing where we take continuous measurements and control gas valves and ATE enclosures. This involves both digital and analogue input and outputs with signal analysis routines and process control all running from a GUI. By breaking down each part into subVIs we are able to reconfigure the tests with the click and drag of the mouse.
Not exactly the same as C/C++, but a similar implementation of measurement, control and analysis using Visual Basic appears complex and hard to maintain by comparison.
I think the process of programming is more important than the actual coding language, and you should follow the style guidelines for a graphical programming language. LabVIEW block diagrams show the flow of data (dataflow programming), so it should be easy to see potential race conditions, although I've never had any problems. If you have a C codebase, building it into a DLL will allow LabVIEW to call it directly.
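As a minimal sketch of what "building it into a DLL" can look like (the filter function here is an invented example, not anyone's real code): compile something like the following with C linkage into a shared library, then point LabVIEW's Call Library Function Node at the exported symbol.

    // One step of a first-order low-pass filter, exported with C linkage
    // so LabVIEW's Call Library Function Node can bind to it by name.
    extern "C" double lowpass_step(double x, double y_prev, double alpha) {
        return y_prev + alpha * (x - y_prev);
    }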
There are definitely merits to both choices; however, since your domain is an educational experience I think a C/C++ solution would most benefit the students. Graphical programming will always be an option but simply does not provide the functionality in an elegant manner that would make it more efficient to use than textual programming for low-level programming. This is not a bad thing - the whole point of abstraction is to allow a new understanding and view of a problem domain. The reason I believe many may be disappointed with graphical programming though is that, for any particular program, the incremental gain in going from programming in C to graphical is not nearly the same as going from assembly to C.
Knowledge of graphical programming would benefit any future programmer for sure. There will probably be opportunities in the future that only require knowledge of graphical programming and perhaps some of your students could benefit from some early experience with it. On the other hand, a solid foundation in fundamental programming concepts afforded by a textual approach will benefit all of your students and surely must be the better answer.
The team captain thinks that LabVIEW is better for its ease of learning and teaching. Is that true?
I doubt that would be true for any single language, or paradigm. LabVIEW could surely be easier for people with an electronics engineering background; making programs in it is "simply" drawing wires. Then again, such people might already have been exposed to programming as well.
One essential difference - apart from the graphics - is that LabVIEW is demand-based (dataflow) programming. You start from the outcome and state what is needed to get to it. Traditional programming tends to be imperative (going the other way round).
Some languages can provide both. I crafted a multithreading library for Lua recently (Lanes), and it can be used for demand-based programming in an otherwise imperative environment. I know there are successful robots out there run mostly on Lua (Crazy Ivan at Lua Oh Six).
Have you had a look at the Microsoft Robotics Studio?
http://msdn.microsoft.com/en-us/robotics/default.aspx
It allows for visual programming (VPL):
http://msdn.microsoft.com/en-us/library/bb483047.aspx
as well as modern languages such as C#.
I encourage you to at least take a look at the tutorials.
My gripe against LabVIEW (and MATLAB in this respect) is that if you plan on embedding the code in anything other than x86 (and LabVIEW has tools to put LabVIEW VIs on ARMs), then you'll have to throw as much horsepower at the problem as you can, because it's inefficient.
LabVIEW is a great prototyping tool: lots of libraries, easy to string together blocks, maybe a little difficult for advanced algorithms, but there's probably a block for what you want to do. You can get functionality done quickly. But if you think you can take that VI and just put it on a device, you're wrong. For instance, if you make an adder block in LabVIEW, you have two inputs and one output. What is the memory usage for that? Three variables' worth of data? Two? In C or C++ you know, because you can write either z = x + y or x += y, and you know exactly what your code is doing and what the memory situation is. Memory usage can spike quickly, especially because (as others have pointed out) LabVIEW is highly parallel. So be prepared to throw more RAM than you thought at the problem. And more processing power.
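To spell out the adder example in C++ terms (a sketch; the function names are ours): the out-of-place version visibly allocates a third buffer, the in-place version visibly does not.

    #include <cstddef>
    #include <vector>

    // z = x + y: three buffers are live at once, and you can see it.
    std::vector<double> add_copy(const std::vector<double>& x,
                                 const std::vector<double>& y) {
        std::vector<double> z(x.size());
        for (std::size_t i = 0; i < x.size(); ++i)
            z[i] = x[i] + y[i];
        return z;
    }

    // x += y: the result overwrites x, so no extra buffer is needed.
    void add_in_place(std::vector<double>& x, const std::vector<double>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)
            x[i] += y[i];
    }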
In short, LabVIEW is great for rapid prototyping, but you lose too much control in other situations. If you're working with large amounts of data or limited memory/processing power, then use a text-based programming language so you can control what goes on.
People always compare LabVIEW with C++ and say, "Oh, LabVIEW is high-level and has so much built-in functionality. Try acquiring data, doing a DFFT, and displaying the data - it's so easy in LabVIEW; try it in C++."
Myth 1: It's hard to get anything done in C++ because it's so low-level, while LabVIEW has many things already implemented.
The problem is, if you are developing a robotic system in C++, you MUST use libraries like OpenCV, PCL, etc., and you would be even smarter if you used a software framework designed for building robotic systems, like ROS (Robot Operating System). Therefore you need to use a full set of tools. In fact, there are more high-level tools available when you use ROS + Python/C++ with libraries such as OpenCV and PCL. I have used LabVIEW Robotics, and frankly, commonly used algorithms like ICP are not there, and it's not like you can easily use other libraries now.
Myth 2: Graphical programming languages are easier to understand.
It depends on the situation. When you are coding a complicated algorithm, the graphical elements take up valuable screen space and it becomes difficult to understand the method. To understand LabVIEW code you have to read over an area that grows as O(n^2), whereas text code you just read top to bottom.
What if you have parallel systems? ROS implements a graph-based architecture built on subscriber/publisher messages implemented using callbacks, and it's pretty easy to have multiple programs running and communicating. Having individual parallel components separated makes it easier to debug. For instance, stepping through parallel LabVIEW code is a nightmare because control flow jumps from one place to another. In ROS you don't explicitly draw out your architecture like in LabVIEW; however, you can still see it by running the command rosrun rqt_graph rqt_graph (which will show all connected nodes).
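To show the flavor of that publisher/subscriber style, here is a minimal roscpp (ROS 1) subscriber sketch; the topic name is invented for the example. Each node like this runs as its own process, which is what keeps the parallel components separate and debuggable.

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    // Callback invoked for every message published on the topic.
    void commandCallback(const std_msgs::String::ConstPtr& msg) {
        ROS_INFO("got command: %s", msg->data.c_str());
    }

    int main(int argc, char** argv) {
        ros::init(argc, argv, "listener");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe("robot/commands", 10, commandCallback);
        ros::spin();  // dispatch callbacks until the node is shut down
        return 0;
    }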
"The future of programming is graphical." (Think so?)
I hope not; the current implementation of LabVIEW does not allow mixing text-based and graphical methods (there is MathScript, however it is incredibly slow).
It's hard to debug, because you can't hide the parallelism easily.
It's hard to read LabVIEW code, because you have to look over so much area.
LabVIEW is great for data acquisition and signal processing, but not for experimental robotics, because most of the high-level components like SLAM (simultaneous localization and mapping), point cloud registration, and point cloud processing are missing. Even if they do add these components and they are easy to integrate like in ROS, because LabVIEW is proprietary and expensive they will never keep up with the open-source community.
In summary, if LabVIEW is the future of mechatronics, I am changing my career path to investment banking... If I can't enjoy my work, I may as well make some money and retire early...
