What to study to get into robotics?

What should someone study at university level if they want to get into robotics and build robots? So far, 'Mechatronics' seems to be the field I'm looking for. I looked at a few plain 'robotics' courses, but they seem to cover only the electrical and computer work, without any details on building the mechanical components of robots.

I'm a professional robotics research consultant, with 30 years of experience working for organizations like SRI International and JPL.
Like computers, robotics has quite a strong divide between the software and the hardware. Hardware is further subdivided into actuators and sensors.
If you'd said "I want to get into computers", I would explain that only a few hardware engineers actually design and build physical computers--most researchers assume that the hardware and firmware has been built already, and then they worry about the software--how to make the system actually work.
Similarly with robots, building the hardware is a job for the mechanical engineers (to design the structure and heat dissipation), with little bits and pieces for power electrical engineers (to spec the motors) and computer engineers (to design the firmware silicon). Next-generation robots also use industrial designers (to make the outsides look pretty, and the insides fit well together).
Research areas for actuator design include fingered hands; tentacles; hummingbird and other bird and insect wings; springy wheels; legs; non-electronic designs for high radiation areas; and surgical instruments.
With cameras in every cell phone, vision sensors are mostly a solved problem at this point. Research areas for sensor design include smart flexible tactile skin, brain wave sensors, and other biomedical sensors. There's still some room for good force sensors as well. These fall in the realms of materials engineering, computer engineering, mechanical engineering, and biomedical engineering.
In order to drive the actuators properly so they don't shake themselves apart, you need a control-theory engineer. Start with Fourier transforms so that you can then understand z-transforms. The learning curve on this mathematics is extremely steep, and careers are quite few, so either you have to be born to be a controls engineer or you should let someone else handle these lower-level details for you.
Signal processing, for the medium- and low-level sensor drivers, has been under the domain of the EEs historically. This works its way up to image processing, which falls under computer science, and then image understanding, which is in the A.I. branch of CS.
However, as I mentioned, the hardware, firmware, and drivers are all manufacturing details that you solve once and then sell forever. Anybody can buy a Lego or a Bioloids kit off the shelf now, and start working with motors. It's not like 2006, when the Fujitsu HOAP humanoid robot we were working with at JPL was a $50,000 custom-ordered special.
Most of what I consider the really interesting work starts by assuming the hardware and drivers have already been accomplished--and then, what do you do with the system? This is completely in the realm of software.
Robotic software control starts with 3D simulators, which in turn are based on forward kinematics; eventually inverse kinematics; dynamics, if you feel like it; and physics-engine simulations. Math here centers around locations [position + orientation], which are best represented by using [4x4] homogeneous coordinate transformation matrices. These are not very hard, and you can get a good background in them from any computer graphics textbook. Make sure you follow the religion of post-multiplying by matrices ending in a column vector on the right; this allows you to chain base-to-waist-to-shoulder-to-elbow-to-hand kinematics in a way that you'll be able to understand. Early textbooks proposed premultiplying using row vectors, because they thought it wouldn't make a difference. It does.
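To make that post-multiply convention concrete, here is a minimal sketch in Python/NumPy; the joint names and numeric values are made up for illustration:

    import numpy as np

    def transform(rot_z_deg, translation):
        """A 4x4 homogeneous transform: rotation about z, then translation."""
        t = np.radians(rot_z_deg)
        T = np.eye(4)
        T[0, 0], T[0, 1] = np.cos(t), -np.sin(t)
        T[1, 0], T[1, 1] = np.sin(t),  np.cos(t)
        T[:3, 3] = translation
        return T

    # Chain base -> waist -> shoulder -> elbow by post-multiplying,
    # with a homogeneous column vector on the right.
    base_T_waist     = transform(30, [0.0, 0.0, 0.8])
    waist_T_shoulder = transform(0,  [0.2, 0.0, 0.4])
    shoulder_T_elbow = transform(45, [0.3, 0.0, 0.0])

    base_T_elbow = base_T_waist @ waist_T_shoulder @ shoulder_T_elbow
    p_elbow = np.array([0.1, 0.0, 0.0, 1.0])   # point in the elbow frame
    p_base  = base_T_elbow @ p_elbow           # same point in the base frame
    print(p_base)

Reading the chain left to right matches the kinematic chain itself, which is exactly why the post-multiply convention is easier to keep straight.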
Of course the physics engines require a decent knowledge of physics.
Higher-level processing is accomplished using artificial intelligence, usually rules-based systems. Natural-language processing also can tie in linguistics and phonetics. Speech recognition and speech generation are again mostly signal processing, taught in EE and CS.
Recent advances work on Big Data, which uses statistics, Bayesian reasoning, and basis vector spaces (from mathematics).
Robotics has not yet broken out. It is still at the level cell phones were at when Gordon Gekko was walking on the beach talking into a "portable phone" the size of a shoe. I don't see robots becoming ubiquitous before 2020. Around 2025, being a robot programmer will be in as much demand as being an app programmer is today. Study lots of A.I. Start early.
State-of-the-art humanoid robot system design as of 2006 [short movie]:
http://www.seqcon.com/caseJPL.html
Very high level block diagram of components [graphic]:
http://www.seqcon.com/images/SystemSchematic640.gif

I would highly recommend looking into Artificial Intelligence for Robotics on Udacity; it is a very interesting course that covers the software and AI side. Coursera also offers a free online robotics course, as well as other courses that are very relevant and useful to robotics.

Mechanical and electrical engineering and computer science.
Mechanical engineering will inform choices about servos, linkages, gears, and all other mechanical components.
Control theory is the junction of mechanical and electrical engineering. You'll need that.
So much of control is digital these days, so EE and computer science will be a part of it.
It's a big field. Good luck.

Industrial robotics is usually handled by mechanical engineers, and sport/team robotics by electrical engineering, electronics engineering, or computer science majors. It all depends on what you mean by "robotics". Also, in case nobody else mentions it, a Master's degree is strongly encouraged.
As an added bonus, the math used in industrial robotics is directly linked to math for game development. There isn't really a clear-cut line of who is supposed to be doing what in robotics.

Mechatronics is the current field of study for those interested in robotics. It combines mechanical, electrical, controls, and software engineering as they relate to robotics.
In the past we came from many different backgrounds: mechanical engineers, electrical, electronics, and software. I am an Application Engineer for a robot manufacturer. I started out in avionics, moved to automated test equipment, then to automated material delivery systems; I became a robotics service technician and manager, then moved over to application programming and training.
One final note, be prepared to keep learning. This is a field that is constantly changing and evolving.

Related

How does PC/phone recognize person with one pic?

Recently, I've been studying facial recognition with OpenCV, and I'm trying some simple examples based on what I've studied.
I'm considering using it in a front-door setting.
Nowadays some buildings or apartments use facial recognition to keep out intruders. When someone joins them (a company or a building), they require the person's picture. As far as I know, they require just one picture.
I didn't think about it at the time, but now I'm very curious about it.
Famous algorithms such as PCA and LDA use machine learning to increase their success rates, and to use machine learning they need as many sample images as you can provide. That's why I'm curious: buildings or companies require just one picture, yet they can recognize each person, and their accuracy is very good. How can this happen? Is there any other algorithm besides PCA or LDA?
Thanks for reading!
As far as I know, this hasn't been achieved yet, so I don't think they can develop software that recognizes a person using only one picture.
Most likely they train the algorithm with the authorized person's pictures, so if that one picture does not match the trained ones, the algorithm can say it is an intrusion.
Edit:
As linuxqwerty pointed out those commercial products are already trained with huge datasets.
Through this training, the algorithm learns to extract features from all those sample faces.
It ends up knowing almost every kind of feature a human face can have.
For example: thickness of eyebrows, distance between eyes, roundness of chin... These are just the ones a human could name; the algorithm can extract thousands of such features.
It can then store each face as a representation of those features.
So now we have commercial software that can represent faces as binary codes with a lot of digits.
Coming back to your question:
The apartment or company bought this software.
They enrolled the picture of the authorized person.
What the software does is simply convert that picture into something like a thousand-digit password.
That person now has a unique password, and the system can reproduce that password only from his face.
To sum up:
The learning part was achieved using big face databases.
Thanks to learning part, the recognition part can be done by using only one picture.
PS: Corrections are welcome.
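To make the "thousand-digit password" idea concrete, here is a minimal Python sketch; embed() is a hypothetical stand-in for a feature extractor pretrained offline on a large face database, and the threshold is an assumption:

    import numpy as np

    def embed(image):
        """Hypothetical stand-in for a pretrained face-embedding network."""
        raise NotImplementedError("plug in a real feature extractor here")

    def cosine_similarity(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def is_authorized(enrolled_embedding, probe_image, threshold=0.8):
        # One enrollment picture suffices: we only compare feature vectors.
        probe_embedding = embed(probe_image)
        return cosine_similarity(enrolled_embedding, probe_embedding) >= threshold

The point is that the expensive learning happens offline; enrollment is just storing one vector and recognition is just a distance comparison.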
I happened to read about facial recognition before, that time I wanted to do it as my semester project. And of course I have heard and thought of using OpenCV as well.
Your question is simple: companies or homes that use facial recognition usually use a very well-developed product, which normally includes well-trained facial recognition. Since we are talking about security here, companies will normally buy these security products; if they just want a tool to deter intruders, with less focus on practical usage and recognition accuracy, they can opt for free facial recognition software.
So when I talk about well-trained facial recognition, I mean it was trained on huge databases (the photos to be recognized that you mentioned); the training is done even before the software is officially launched, during the development stage. Good facial recognition software requires both complete, detailed programming and huge photo databases (taken at different ambient light intensities and with different facial features, like hair styles or spectacles) to train it.
Therefore, the accuracy of the software does not depend solely on the number of pictures provided during its use, as long as it was well-trained in the first place. Thanks, and I hope I answered your question.
ps: recognize is spelled this way (US); recognise (UK) =)

Is there an algorithm to describe a portrait of a person in words?

I'm looking for an algorithm that analyzes a portrait photo of a person and outputs a descriptive text like "young man, rather long nose, green eyes".
It doesn't matter if the output is very precise or not; it is for an art installation. But it should be possible to do it automatic.
I found this one: https://code.google.com/p/deep-learning-faces/, but it is impossible for me to fulfill the hardware and software requirements (NVIDIA Fermi GPUs & matlab)
Do you know of anything more accessible?
There are a few free face analyser APIs that are fairly easy to use:
Rekognition, by Orbeus
MP Face Analyzer SDK (evaluation) by MotionPortrait
Faceplusplus (linked above)
You might have to take measurements of an "average face" to make interpretations like "long nose". ToonifyMe is an app that caricatures faces using this approach.
Some of these APIs can actually work on a Pi. Rekognition does the analysis in the cloud, so that should be doable.
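To illustrate the "average face" measurement idea, here is a hedged Python sketch; the landmark names and the average ratio are assumptions, and a real landmark detector (e.g. dlib's 68-point model) would supply the points:

    import numpy as np

    AVERAGE_NOSE_RATIO = 0.28  # assumed nose-length / face-height on an average face

    def describe_nose(landmarks):
        """landmarks: dict of named (x, y) points from a face landmark detector."""
        nose = np.linalg.norm(np.subtract(landmarks["nose_tip"],
                                          landmarks["nose_bridge"]))
        face = np.linalg.norm(np.subtract(landmarks["chin"],
                                          landmarks["forehead"]))
        ratio = nose / face
        if ratio > 1.15 * AVERAGE_NOSE_RATIO:
            return "rather long nose"
        if ratio < 0.85 * AVERAGE_NOSE_RATIO:
            return "rather short nose"
        return "average nose"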
This is one of the hardest problems in computer vision. I'd recommend you watch the TED talk by Fei-Fei Li to get an understanding of it:
https://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures
In short: If you want to use any of the state-of-the-art methods you will need a lot of processing power. A lot more than just a single high-end graphics card, I'm talking about super computing here.
And unless you're really lucky and find a research group that has released their implementation, this also requires a huge amount of engineering.
I found this online service that describes faces: http://www.faceplusplus.com/
It has a very well documented API and seems to be free of charge. Or at least I didn't find any information about pricing.

How to train an artificial neural network to play Diablo 2 using visual input?

I'm currently trying to get an ANN to play a video game, and I was hoping to get some help from the wonderful community here.
I've settled on Diablo 2. Gameplay is in real time and viewed from an isometric viewpoint, with the player controlling a single avatar on whom the camera is centered.
To make things concrete: the task is to get your character x experience points without having its health drop to 0, where experience points are gained by killing monsters.
Now, since I want the net to operate based solely on the information it gets from the pixels on the screen, it must learn a very rich representation in order to play efficiently, since this presumably requires it to know (implicitly at least) how to divide the game world up into objects and how to interact with them.
And all of this information must be taught to the net somehow. I can't for the life of me think of how to train this thing. My only idea is to have a separate program visually extract something innately good/bad in the game (e.g. health, gold, experience) from the screen, and then use that stat in a reinforcement learning procedure. I think that will be part of the answer, but I don't think it'll be enough; there are just too many levels of abstraction between raw visual input and goal-oriented behavior for such limited feedback to train a net within my lifetime.
So, my question: what other ways can you think of to train a net to do at least some part of this task? Preferably without making thousands of labeled examples.
Just for a little more direction: I'm looking for some other sources of reinforcement learning and/or any unsupervised methods for extracting useful information in this setting. Or a supervised algorithm if you can think of a way of getting labeled data out of a game world without having to manually label it.
UPDATE(04/27/12):
Strangely, I'm still working on this and seem to be making progress. The biggest secret to getting an ANN controller to work is to use the most advanced ANN architectures appropriate to the task. Hence I've been using a deep belief net composed of factored conditional restricted Boltzmann machines, trained in an unsupervised manner (on video of me playing the game) before fine-tuning with temporal-difference back-propagation (i.e. reinforcement learning with standard feed-forward ANNs).
Still looking for more valuable input though, especially on the problem of action selection in real-time and how to encode color images for ANN processing :-)
UPDATE(10/21/15):
Just remembered I asked this question back in the day, and thought I should mention that this is no longer a crazy idea. Since my last update, DeepMind published their Nature paper on getting neural networks to play Atari games from visual inputs. Indeed, the only thing preventing me from using their architecture to play a limited subset of Diablo 2 is the lack of access to the underlying game engine. Rendering to the screen and then redirecting it to the network is just far too slow to train in a reasonable amount of time. Thus we probably won't see this sort of bot playing Diablo 2 anytime soon, but only because it'll be playing something that is either open-source or offers API access to the rendering target. (Quake, perhaps?)
I can see that you are worried about how to train the ANN, but this project hides a complexity you might not be aware of. Object/character recognition in computer games through image processing is a highly challenging task (not to say crazy for FPS and RPG games). I don't doubt your skills, and I'm also not saying it can't be done, but you can easily spend 10x more time recognizing stuff than implementing the ANN itself (assuming you already have experience with digital image processing techniques).
I think your idea is very interesting and also very ambitious. At this point you might want to reconsider it. I sense that this project is something you are planning for university, so if the focus of the work is really the ANN, you should probably pick another game, something simpler.
I remember that someone else came looking for tips on a different but somehow similar project not too long ago. It's worth checking it out.
On the other hand, there might be better/easier approaches for identifying objects in-game if you're accepting suggestions. But first, let's call this project for what you want it to be: a smart-bot.
One method for implementing bots accesses the memory of the game client to find relevant information, such as the location of the character on the screen and its health. Reading computer memory is trivial, but figuring out exactly where in memory to look is not. Memory scanners like Cheat Engine can be very helpful for this.
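A hedged, Windows-only sketch of this memory-reading approach in Python; the process id and address are placeholders you would find with a scanner like Cheat Engine:

    import ctypes

    PROCESS_VM_READ = 0x0010
    kernel32 = ctypes.windll.kernel32

    def read_int32(pid, address):
        """Read a 4-byte integer from another process's memory."""
        handle = kernel32.OpenProcess(PROCESS_VM_READ, False, pid)
        if not handle:
            raise OSError("could not open process")
        value = ctypes.c_int32()
        read = ctypes.c_size_t()
        ok = kernel32.ReadProcessMemory(handle, ctypes.c_void_p(address),
                                        ctypes.byref(value), ctypes.sizeof(value),
                                        ctypes.byref(read))
        kernel32.CloseHandle(handle)
        if not ok:
            raise OSError("read failed")
        return value.value

    # health = read_int32(game_pid, 0x00ABCDEF)  # placeholder address from a scanner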
Another method, which works under the game, involves manipulating rendering information. All objects of the game must be rendered to the screen. This means that the locations of all 3D objects will eventually be sent to the video card for processing. Be ready for some serious debugging.
In this answer I briefly described two methods to accomplish what you want without image processing. If you are interested in them, you can find more in Exploiting Online Games (chapter 6), an excellent book on the subject.
UPDATE 2018-07-26: That's it! We are now approaching the point where this kind of game will be solvable! Using OpenAI's technology on the game DotA 2, a team made an AI that can beat semi-professional gamers in a 5v5 game. If you know DotA 2, you know this game is quite similar to Diablo-like games in terms of mechanics, but one could argue that it is even more complicated because of the team play.
As expected, this was achieved thanks to the latest advances in reinforcement learning with deep learning, and by using open game frameworks like OpenAI's, which ease the development of an AI since you get a neat API and can accelerate the game (the AI played the equivalent of 180 years of gameplay against itself every day!).
On the 5th of August 2018 (in 10 days!), this AI is planned to be pitted against top DotA 2 gamers. If this works out, expect a big revolution: maybe not as publicized as the solving of Go, but it will nonetheless be a huge milestone for game AI!
UPDATE 2017-01: The field has been moving very fast since AlphaGo's success, and there are new frameworks to facilitate the development of machine learning algorithms on games almost every month. Here is a list of the latest ones I've found:
OpenAI's Universe: a platform to play virtually any game using machine learning. The API is in Python, and it runs the games behind a VNC remote desktop environment, so it can capture the images of any game! You can probably use Universe to play Diablo II through a machine learning algorithm!
OpenAI's Gym: similar to Universe but targeting reinforcement learning algorithms specifically (so it's kind of a generalization of the framework used for AlphaGo, but for a lot more games); see the short interaction-loop sketch after this list. There is a course on Udemy covering the application of machine learning to games like Breakout or Doom using OpenAI Gym.
TorchCraft: a bridge between Torch (machine learning framework) and StarCraft: Brood War.
pyGTA5: a project to build self-driving cars in GTA5 using only screen captures (with lots of videos online).
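As promised above, a minimal sketch of the Gym interaction loop; API details vary between Gym versions (this follows the classic pre-0.26 interface), the Atari extras are assumed to be installed, and the random policy is just a placeholder:

    import gym

    env = gym.make("Breakout-v0")
    observation = env.reset()   # raw screen pixels, exactly what the question wants
    done = False
    while not done:
        action = env.action_space.sample()             # placeholder random policy
        observation, reward, done, info = env.step(action)
    env.close()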
Very exciting times!
IMPORTANT UPDATE (2016-06): As noted by the OP, this problem of training artificial networks to play games using only visual inputs is now being tackled by several serious institutions, with quite promising results, such as DeepMind's deep Q-learning network (DQN).
And now, if you want to take on the next-level challenge, you can use one of the various AI vision game development platforms such as ViZDoom, a highly optimized platform (7000 fps) for training networks to play Doom using only visual inputs:
ViZDoom allows developing AI bots that play Doom using only the visual information (the screen buffer). It is primarily intended for research in machine visual learning, and deep reinforcement learning, in particular.
ViZDoom is based on ZDoom to provide the game mechanics.
And the results are quite amazing, see the videos on their webpage and the nice tutorial (in Python) here!
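A hedged sketch of the ViZDoom loop in Python; the config name is an assumption (ViZDoom ships several example scenario configs), and the action layout must match the buttons declared in the config:

    from vizdoom import DoomGame

    game = DoomGame()
    game.load_config("basic.cfg")    # assumed example scenario config
    game.init()
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()
        pixels = state.screen_buffer          # the only input the agent sees
        reward = game.make_action([1, 0, 0])  # one-hot button presses
    game.close()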
There is also a similar project for Quake 3 Arena, called Quagents, which also provides easy API access to underlying game data; but you can skip that, use only screenshots, and keep the API just to control your agent.
Why is such a platform useful if we only use screenshots? Even if you don't access the underlying game data, such a platform provides:
a high-performance implementation of the game (you can generate more data/plays/learning generations in less time, so your learning algorithm can converge faster!);
a simple and responsive API to control your agent (i.e., if you try to use human inputs to control a game, some of your commands may be lost, so you'd also have to deal with unreliable outputs...);
easy setup of custom scenarios;
customizable rendering (which can be useful to "simplify" the images you get, to ease processing);
synchronized ("turn-by-turn") play (so you don't need your algorithm to work in real time at first; that's a huge complexity reduction);
additional convenience features such as cross-platform compatibility and backward compatibility (you don't risk your bot no longer working with the game after a new game update), etc.
To summarize, the great thing about these platforms is that they alleviate much of the previous technical issues you had to deal with (how to manipulate game inputs, how to setup scenarios, etc.) so that you just have to deal with the learning algorithm itself.
So now, get to work and make us the best AI visual bot ever ;)
Old post describing the technical issues of developing an AI relying only on visual inputs:
Contrary to some of my colleagues above, I do not think this problem is intractable. But it surely is a hella hard one!
The first problem, as pointed out above, is the representation of the state of the game: you can't represent the full state with just a single image; you need to maintain some kind of memory (health, but also objects equipped and items available to use, quests and goals, etc.). You have two ways to fetch this information: either access the game data directly, which is the most reliable and easiest, or create an abstract representation of it by implementing some simple procedures (open inventory, take a screenshot, extract the data). Of course, extracting data from a screenshot requires either a supervised procedure (that you define completely) or an unsupervised one (via a machine learning algorithm, which greatly increases the complexity...). For unsupervised machine learning, you would need a fairly recent kind of algorithm called a structural learning algorithm (which learns the structure of the data rather than how to classify it or predict a value). One such algorithm is the Recursive Neural Network (not to be confused with the Recurrent Neural Network) by Richard Socher: http://techtalks.tv/talks/54422/
Then, another problem is that even when you have fetched all the data you need, the game is only partially observable. Thus you need to inject an abstract model of the world and feed it with processed information from the game, for example the location of your avatar, but also the location of quest items, goals, and enemies outside the screen. You might look into the Mixture Particle Filter (Vermaak, 2003) for this.
Also, you need an autonomous agent with dynamically generated goals. A well-known architecture you can try is the BDI agent, but you will probably have to tweak it to work in your practical case. As an alternative, there is also the Recursive Petri Net, which you can probably combine with all kinds of variations of Petri nets to achieve what you want, since it is a very well-studied and flexible framework with solid formalization and proof procedures.
And at last, even if you do all the above, you will need to find a way to emulate the game at accelerated speed (using a video may be nice, but the problem is that your algorithm would only spectate without control, and being able to try for itself is very important for learning). Indeed, it is well known that current state-of-the-art algorithms take a lot more time to learn the same thing a human can (even more so with reinforcement learning), so if you can't speed up the process (i.e., speed up the game time), your algorithm won't converge even in a single lifetime...
To conclude, what you want to achieve here is at the limit (and maybe a bit beyond) of current state-of-the-art algorithms. I think it may be possible, but even if it is, you are going to spend a hella lot of time, because this is not a theoretical problem but a practical one, and thus you will need to implement and combine a lot of different AI approaches to solve it.
Several decades of research with a whole team working on it might not suffice, so if you are alone and working on it part-time (as you probably have a job for a living), you may spend a whole lifetime without reaching anywhere near a working solution.
So my most important advice here is to lower your expectations and reduce the complexity of your problem by using all the information you can. Avoid relying on screenshots as much as possible (i.e., try to hook directly into the game; look into DLL injection), and simplify some subproblems with supervised procedures rather than letting your algorithm learn everything (i.e., drop image processing for now as much as possible and rely on internal game information). Later on, if your algorithm works well, you can replace parts of your AI program with image processing, gradually attaining your full goal; for example, once you get something working quite well, you can complexify the problem by replacing supervised procedures and in-memory game data with unsupervised machine learning on screenshots.
Good luck, and if it works, make sure to publish an article, you can surely get renowned for solving such a hard practical problem!
The problem you are pursuing is intractable in the way you have defined it. It is usually a mistake to think that a neural network would "magically" learn a rich representation of a problem. A good fact to keep in mind when deciding whether an ANN is the right tool for a task is that it is an interpolation method. Think about whether you can frame your problem as finding an approximation of a function, where you have many points from this function and lots of time to design the network and train it.
The problem you propose does not pass this test. Game control is not a function of the image on the screen. There is a lot of information the player has to keep in memory. For a simple example, it is often true that every time you enter a shop in a game, the screen looks the same. However, what you buy depends on the circumstances. No matter how complicated the network, if the screen pixels are its input, it would always perform the same action upon entering the store.
Besides, there is the problem of scale. The task you propose is simply too complicated to learn in any reasonable amount of time. You should see aigamedev.com for how game AI works. Artificial neural networks have been used successfully in some games, but in a very limited manner. Game AI is difficult and often expensive to develop. If there were a general approach to constructing functional neural networks, the industry would most likely have seized on it. I recommend that you begin with much, much simpler examples, like tic-tac-toe.
Seems like the heart of this project is exploring what is possible with an ANN, so I would suggest picking a game where you don't have to deal with image processing (which, from others' answers here, seems like a really difficult task in a real-time game). You could use the StarCraft API to build your bot; it gives you access to all relevant game state.
http://code.google.com/p/bwapi/
As a first step you might look at the difference of consecutive frames. You have to distinguish between background and actual monster sprites. I guess the world may also contain animations. In order to find those I would have the character move around and collect everything that moves with the world into a big background image/animation.
You could detect and identify enemies with correlation (using an FFT). However, if the animations repeat pixel-exact, it will be faster to just look at a few pixel values. Your main task will be to write a robust system that identifies when a new object appears on the screen and gradually adds all the frames of the sprite to a database. You will probably have to build models for weapon effects as well; those should be subtracted so that they don't clutter your opponent database.
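A minimal sketch of the frame-differencing first step in Python/OpenCV; the file names and threshold are placeholders, and the unpacking follows the OpenCV 4 findContours signature:

    import cv2

    prev = cv2.cvtColor(cv2.imread("frame_0001.png"), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(cv2.imread("frame_0002.png"), cv2.COLOR_BGR2GRAY)

    diff = cv2.absdiff(curr, prev)                     # pixels that changed
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    moving = [cv2.boundingRect(c) for c in contours]   # candidate sprite regions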
Well, assuming that at any time you could generate a set of 'outcomes' (which might involve probabilities) from a set of all possible 'moves', and that there is some notion of consistency in the game (e.g. you can play level X over and over again), you could start with N neural networks with random weights and have each of them play the game in the following way:
1) For every possible 'move', generate a list of possible 'outcomes' (with associated probabilities)
2) For each outcome, use your neural network to determine an associated 'worth' (score) of the 'outcome' (e.g. a number between -1 and 1, with 1 being the best possible outcome and -1 being the worst)
3) Choose the 'move' leading to the highest prob * score
4) If the move led to a 'win' or 'lose', stop, otherwise go back to step 1.
After a certain amount of time (or a 'win'/'lose'), evaluate how close the neural network was to the 'goal' (this will probably involve some domain knowledge). Then throw out the 50% (or some other percentage) of NNs that were farthest away from the goal, do crossover/mutation of the top 50%, and run the new set of NNs again. Continue running until a satisfactory NN comes out.
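A hedged sketch of that evolutionary loop in Python; NeuralNet, play_game (which returns how close the net got to the goal), crossover, and mutate are all hypothetical placeholders:

    import random

    POPULATION = 50

    def evolve(generations):
        nets = [NeuralNet.random_weights() for _ in range(POPULATION)]
        for _ in range(generations):
            scored = sorted(nets, key=play_game, reverse=True)  # fitness ranking
            survivors = scored[: POPULATION // 2]               # keep the top 50%
            children = [mutate(crossover(random.choice(survivors),
                                         random.choice(survivors)))
                        for _ in range(POPULATION - len(survivors))]
            nets = survivors + children
        return max(nets, key=play_game)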
I think your best bet would be a complex architecture involving a few/many networks: i.e. one recognizing and responding to items, one for the shop, one for combat (maybe here you would need one for enemy recognition and one for attacks), etc.
Then try to think of the simplest possible Diablo II gameplay, probably a Barbarian. Then keep it simple at first, like Act I, first area only.
Then I guess valuable 'goals' would be disappearance of enemy objects, and diminution of health bar (scored inversely).
Once you have these separate, 'simpler' tasks taken care of, you can use a 'master' ANN to decide which sub-ANN to activate.
As for training, I see only three options. You could use the evolutionary method described above, but then you need to manually select the 'winners', unless you code a whole separate program for that. You could have the networks 'watch' someone play; here they would learn to emulate a player's or group of players' style. The network tries to predict the player's next action, gets reinforced for a correct guess, etc. If you can actually get the ANN you want, this could be done with video of gameplay, with no need for actual live play. Finally, you could let the network play the game, using enemy deaths, level-ups, regained health, etc. as positive reinforcement and player deaths, lost health, etc. as negative reinforcement. But seeing how even a simple network requires thousands of concrete training steps to learn even simple tasks, you would need a lot of patience for this one.
All in all your project is very ambitious. But I for one think it could 'in theory be done', given enough time.
Hope it helps and good luck!

Microsoft Robotics Studio, simple simulation

I am soon to start with Microsoft Robotics Studio.
My question to all the gurus of MSRS: can simple simulation (such as obstacle avoidance and wall following) be done without any hardware?
Does MSRS have 3-dimensional as well as 2-dimensional rendering? As of now I do not have any hardware and I am only interested in simulation, when I have the robot hardware I may try to interface it!
Sorry for a silly question; I am an MSRS noob, but I have previous robotics hardware and software experience.
Other than MSRS and the Player Project (Player/Stage/Gazebo), is there any other software that simulates robots effectively?
MSRS tackles several key areas. One of them is simulation. The 3D engine is based on the AGEIA physics engine and can simulate not only your robot and its sensors, but also a somewhat complex environment.
The demo I saw had a Pioneer with a SICK lidar running around a cluttered apartment living room, with tables, chairs, and so on.
The idea is that your code doesn't even need to know if it's running on the simulator or the real robot.
Edit:
A few links as requested:
Start here: http://msdn.microsoft.com/en-us/library/dd939184.aspx
[screenshot] http://i.msdn.microsoft.com/Dd939184.image001(en-us,MSDN.10).jpg
Then go here: http://msdn.microsoft.com/en-us/library/dd939190.aspx
[screenshot] http://i.msdn.microsoft.com/Dd939190.image008(en-us,MSDN.10).jpg
Then take a look at some more samples: http://msdn.microsoft.com/en-us/library/cc998497.aspx
[screenshot] http://i.msdn.microsoft.com/Cc998496.Sumo1(en-us,MSDN.10).jpg
The simple answer is yes: the MRDS simulator and Player/Stage have very similar capabilities. MRDS uses a video-game-quality physics engine under the hood, so you can do collisions and some basic physics on your robots, but it won't have the accuracy of a MATLAB simulation (on the flip side, it's real-time and easier to develop with). You can do a lot in MRDS without any hardware.
MRDS uses some pretty advanced programming abstractions, so it can be a bit intimidating at first, but do the tutorials, plus the "Software Engineering for Robotics" course posted on CodePlex, and you will be fine. http://swrobotics.codeplex.com/

3D Character/Model Creator

I'm working on a project to create a 3D game using XNA/C#, and the game will use a lot of 3D characters.
Looking at current 3D games, some feature close to hundreds of characters, which led me to think there must be some good 3D character/model creators.
To narrow the sample: the game will have characters like those in the game "Grand Chase". Are there any good (and easy) character model creators to use in XNA development? Free is better, of course, but I will consider paid versions too.
EDIT: Another question is about the movements of the characters. Are the movements like walk, jump, sit, etc. "created" by the "character creator tool" or by the game?
"Another question is about the movements of the characters. The movements like walk, jump, sit, etc. are 'created' by the 'character creator tool' or by the game?"
Animation in various forms, key frame, skeletal and so forth are created in the 3D modelling software.
The game then plays these animations at certain points. For example, pressing jump will play the jump animation. Games often use a form of linear interpolation to blend different animations together to smooth them.
Consider a football game, you can animate the footballer running in eight different directions, but what if the player suddenly changes direction midflow? The modeller could not account for this, therefore the engine will "blur" the difference between the animations together to provide a smooth transition via linear interpolation or some other blending factor.
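A minimal sketch of that linear-interpolation blend in Python, with poses simplified to per-joint angle arrays (real engines typically interpolate per-bone quaternions; the numbers here are made up):

    import numpy as np

    def blend_poses(pose_a, pose_b, alpha):
        """alpha in [0, 1]: 0 = pure pose_a, 1 = pure pose_b."""
        return (1.0 - alpha) * np.asarray(pose_a) + alpha * np.asarray(pose_b)

    run_east  = np.array([0.1, 0.8, -0.3])  # made-up joint angles
    run_north = np.array([0.4, 0.2,  0.5])
    frame = blend_poses(run_east, run_north, 0.25)  # 25% into the direction change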
Software
As for software, free editors such as Blender will do. However, I prefer Maya/Max. You can often get student editions of these; check their official websites. I got a free six-month version via my university. While you legally cannot use the models in commercial games, for learning purposes it is fine. I believe they used to offer a Personal Learning Edition, but this no longer exists as far as my searching has found.
Most 3D game objects are created in 3D software, such as Maya and Blender. But there are indeed applications that speed up character modeling, such as Poser. If you quickly need a low-poly mesh without big bucks and with a lot of exporters, try MilkShape 3D. It's cheap and it's easy to work with. You can easily build meshes with joint animations, which you can edit later to fine-tune your characters.
"EDIT: Another question is about the movements of the characters. The movements like walk, jump, sit, etc. are 'created' by the 'character creator tool' or by the game?"
Poser 3D. It's not free, but it comes with a good library for starters. You might also like DAZ 3D, another commercial product. Personally, I am not excited about most 3D modeling software that comes for free; the exceptions are Blender and Anim8or. If you are not that well tuned into professional modeling, I would still recommend you go for MilkShape 3D. It has a really easy learning curve, and you can jump in and produce something quickly just to test and work out your game (there is more inside a game than models). You can always fine-tune the models later in whatever software you prefer.
The XSI Mod Tool will allow you to do character modelling and animation, and is a (slightly) cut-down version of the full XSI tool.
It's free for non-commercial use and has close integration with XNA, plus it has plugins that support the Unreal Engine, CryEngine, etc.
Available here
If you want, you could try using XBL Avatars; the bonus is that players will actually get to use their avatars in-game, and AFAIK you can procedurally generate characters and such through a code API.
I strongly recommend Blender. It's free, it has tons of robust features, and it's widely used by the XNA community, myself included.
It can be a bit time-consuming to learn how to use it, but once you master the basics, Blender feels like a pencil on paper. (Or, for those of us who suck at drawing, a really good artist that can read your mind :P )
There's also a script called MakeHuman that allows you to parametrically create human models, and I think it works pretty well, myself.
