Learn DirectX or a game engine?

I started learning DirectX 11 with C++. It's hard, but I think I'm doing well.
Then I discovered UDK (the Unreal engine) and read that many good games, like Mass Effect 1-3, were made with it. Now I wonder why I should learn DirectX at all, when so many games are made with an engine, and that's a lot easier.
What are the pros and cons of learning DirectX?

DirectX gives you the capability to push pixels onto the screen. Anything beyond that - physics, map models, dev tools, file formats, music, AI, networking code - is still your problem. A game engine, on the other hand, provides a comprehensive solution for most of the things you will need, but at a cost (technical constraints, a learning curve, and often non-trivial amounts of $$).
It really depends on your goals and needs.

If your goal is to learn graphics programming, then you should choose DirectX 11, because it gives you access to low-level graphics programming.
On the other hand, if you want to jump right into gameplay programming, UDK will let you skip the low-level graphics work and get your hands on gameplay immediately.
If you want to learn physics, audio, or networking programming, take a look at Ogre3D: it's a graphics engine that, as the name suggests, handles the graphics, so all you have to program yourself is the physics, gameplay, and so on.

Related

Does OpenGL ES 2.0 have a steeper learning curve than Metal?

I'm very interested in 3D graphics and heard many developers raving about Metal.
Can someone who has worked with both Metal and OpenGL ES 2.0 comment on how their learning curves compare?
As a beginner who aims to stay loyal to iOS, is Metal easier to learn and master than OpenGL ES 2.0, or is it harder because it is more advanced?
I hope this question will be useful to many as I am trying to figure out where to start.
As a beginner, you might be better served by starting with 3D graphics at a higher level. SceneKit for OS X and iOS lets you describe a 3D scene in terms of its content -- geometry, materials (textures/shading), lights, and cameras -- and load assets created with 3D modeling tools. SceneKit is built on OpenGL (ES), so it uses a lot of the same concepts. As you become familiar with those concepts, you can use SceneKit to work your way into the OpenGL world a bit at a time:
use shader modifiers to write GPU shader code that extends SceneKit's built-in shading
use custom programs to write complete shaders that replace SceneKit's, or techniques to write shaders that postprocess SceneKit's rendering
create custom geometry from your own vertex data with geometry sources & elements
use a node renderer delegate to write your own OpenGL client code that works within a scene
You'll find more info about all of these by watching the SceneKit videos from WWDC: What's New in SceneKit and Building a Game with SceneKit.
Otherwise... OpenGL (ES) and Metal don't have very different learning curves in and of themselves. In fact, I'd consider Metal more approachable than OpenGL in some ways -- for example, many things you can do in GL have implicit and hard-to-predict performance costs, while the Metal analogues of those tasks are much clearer about what their impact on CPU or GPU time is, and they let you decide when expensive work gets done.
On the other hand, Metal is brand new -- there aren't yet a lot of third-party resources to help you learn it. And a lot of the hard things about learning 3D graphics are very similar whether you're working in Metal, OpenGL, DirectX, or another platform/API. Once you learn the important stuff -- there are plenty of books and online tutorials for that, but Stack Overflow isn't the best way to go looking for them -- getting up to speed with Metal, or with OpenGL ES on a specific platform, is pretty easy.
Coming from an OpenGL-ES background, I had a good look at the Metal APIs. I believe that the learning curve for Metal is steeper, not because it's a new API, but because it introduces low level constructs which developers previously didn't need to worry about.
If you compare fixed-pipeline OpenGL with the shader-oriented OpenGL flavours (on mobile: ES 1.x compared with ES 2.x/3.x), and finally with Metal, what you have is a series of increasingly powerful, increasingly generic APIs, detached from the intuitive constructs (triangles, vertices, lights) that constitute OpenGL's historical foundation.
Bear in mind that creating a more usable API isn't the main goal of Metal. The goal of the framework is to help developers get rid of driver overhead.

Learning WebGL and three.js [closed]

I'm new and starting to learn about 3D computer graphics in web browsers. I'm interested in making 3D games in a browser. For anyone who has learned both WebGL and three.js...
Is knowledge of WebGL required to use three.js?
What are the advantages of using three.js vs. WebGL?
Since you have big ambitions, you have to invest the time to learn the fundamentals. It is not a matter of what you learn first -- you can learn them simultaneously if you want to. (That's what I did.)
This means that you need to understand:
WebGL concepts
Three.js
The underlying mathematical concepts
Three.js. Three.js does an excellent job of abstracting away many of the details of WebGL, so personally, I'd suggest using Three.js for your project. But remember, Three.js is in alpha, and it is changing frequently, so you have to be prepared for that. Most people learn Three.js by studying the examples. Avoid outdated books and tutorials, and avoid examples from the net that link to old versions of the library.
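To give a flavour of what studying those examples looks like, here is a minimal "spinning cube" sketch (a hedged illustration in TypeScript, assuming a current three.js installed via npm and a browser environment; the exact API has shifted between versions, which is exactly why old examples mislead):

```typescript
import * as THREE from 'three';

// Scene, camera, renderer: the three objects almost every
// three.js program starts with.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Mesh = geometry (vertex data) + material (how it is shaded).
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

// Render loop: rotate the cube a little and redraw each frame.
function animate(): void {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```

Every piece here (scene graph, camera, geometry, material, render loop) corresponds to a lower-level WebGL concept that three.js manages for you.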
WebGL. If you use Three.js, you don't need to know how to program in WebGL; you just need to understand the WebGL concepts. That means you need to be able to read someone else's WebGL code and understand what you read. That is a lot easier than being expected to write a WebGL program yourself from scratch. You can learn the WebGL concepts sufficiently well using any of the tutorials on the net, such as the beginner's tutorial at WebGLFundamentals.org and Learning WebGL.
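For contrast, here is a hedged sketch of the kind of raw WebGL boilerplate you should be able to read: compiling and linking a shader program by hand, which three.js does behind every material (TypeScript; assumes a <canvas> element on the page):

```typescript
// Grab a WebGL context from a <canvas> element on the page.
const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const gl = canvas.getContext('webgl');
if (!gl) throw new Error('WebGL not supported');

// Compile one shader stage from GLSL source, with error checking.
function compileShader(
  ctx: WebGLRenderingContext, type: number, source: string
): WebGLShader {
  const shader = ctx.createShader(type)!;
  ctx.shaderSource(shader, source);
  ctx.compileShader(shader);
  if (!ctx.getShaderParameter(shader, ctx.COMPILE_STATUS)) {
    throw new Error(ctx.getShaderInfoLog(shader) ?? 'compile failed');
  }
  return shader;
}

// Vertex shader: pass vertex positions straight through.
const vs = compileShader(gl, gl.VERTEX_SHADER, `
  attribute vec4 aPosition;
  void main() { gl_Position = aPosition; }
`);

// Fragment shader: paint every covered pixel red.
const fs = compileShader(gl, gl.FRAGMENT_SHADER, `
  precision mediump float;
  void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }
`);

// Link both stages into a program the GPU can execute.
const program = gl.createProgram()!;
gl.attachShader(program, vs);
gl.attachShader(program, fs);
gl.linkProgram(program);
```

If you can follow code like this, you understand WebGL well enough to use three.js intelligently.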
Math. Again, you at least need to understand the concepts. Three good books are:
3D Math Primer for Graphics and Game Development by Fletcher Dunn and Ian Parberry
Essential Mathematics for Games and Interactive Applications: A Programmer’s Guide by James M. Van Verth and Lars M. Bishop
Mathematics for 3D Game Programming and Computer Graphics by Eric Lengyel
There is a very good online course on three.js - Interactive 3D Graphics at https://www.udacity.com/course/cs291. The course also includes assignments, so you get hands-on experience.
It covers all the basic concepts of three.js and computer graphics.
My personal thoughts are the following:
If you have plenty of time, you could learn both, but note that WebGL is much lower-level than Three.js.
For a first 3D project, experts suggest using a library like Three.js in order to get used to the terminology and the general 3D mental model.
Whichever direction you choose, I suggest you learn or polish up your linear algebra skills, and then learn or refresh your understanding of MVP (Model-View-Projection) transformations. Three.js can abstract much of that away, but I think it's key to understand those concepts well before getting serious about any 3D development.
I wrote an introductory article about MVP when I was first learning 3D programming with OpenGL. I realized that until I was able to explain what those transformation matrices are, and how they relate to the various coordinate spaces, I really didn't know any 3D programming at all, even though I could render objects to the screen.
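For concreteness, the chain in question is the standard pipeline notation (not specific to any one article): a vertex given in model space reaches clip space by being multiplied through the Model, View, and Projection matrices in turn,

```latex
v_{\text{clip}} = P \, V \, M \, v_{\text{model}}
```

where M (Model) places the object in the world, V (View) re-expresses the world relative to the camera, and P (Projection) maps camera space onto the clipping volume. Reading the product right to left follows the vertex through those spaces.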
Since your goal is to create games, I think you'll benefit greatly from learning some raw WebGL first, even if you end up using a framework like Three.js to help you write your code later.
"WebGL is a 2D API and not a 3D API"
http://webglfundamentals.org/webgl/lessons/webgl-2d-vs-3d-library.html
This article describes the fundamental differences between WebGL and 3D libraries like three.js.
It made my choice between WebGL and Three.js a no-brainer.
I came from a Unity3D background, as well as Papervision3D back in the day, so I had a good understanding of how to deal with 3D space. Three.js is the way to go for your initial jump into learning how to deal with WebGL projects. The API is very good, it's very powerful, and if you're coming from another 3D technology, you'll be up and running in very little time.
I spent a lot of time with Threejs.org's examples - there are a ton of them, and they're very good at getting you off and running in the right direction. The docs are decent enough, especially if you compare them to other WebGL 3D APIs out there.
You might also consider getting the free version of Unity3D and the Collada exporter (free when I got it) from their asset store (Window > Asset Store). I found it easy enough to set up my scene in Unity and export it to Collada for use with Three.js.
Also, I posted this class that I use with Three.js called neo ( http://rockonflash.com/webGL/three/neo.js ). Just add it to your project, then call Neo.JackIntoThree() and it will add the methods/properties to Object3D for use in your project. Things like DrawAllAxis() are invaluable when debugging your scene etc.
Hands down though, Three.js is a great way to go - it's flexible enough to let you write your own shaders/objects etc, and powerful enough right out of the box to help you accomplish your goals.
I picked up three.js, but also jumped into GLSL and experimented a lot with three.js's ShaderMaterial. One way of going about it: three.js still abstracts a lot for you, but it also gives you very clean, low-level access to all the rendering (projection, animation) capabilities.
This way, you can follow along even with something like this awesome OpenGL tutorial. You don't have to set up the matrices or the typed arrays, because three.js already sets them up for you and updates them when needed. The shader, though, you can write from scratch - a simple color rendering is two lines of GLSL. There is also a post-processing plugin for three.js that sets up the buffers, full-screen quads, and everything else you need for effects, but the shader itself can be very simple to begin with.
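As a hedged sketch of that workflow (illustrative only; projectionMatrix, modelViewMatrix, and the position attribute are the names three.js injects into every ShaderMaterial):

```typescript
import * as THREE from 'three';

// three.js supplies projectionMatrix, modelViewMatrix and the
// `position` attribute automatically -- no manual buffer setup.
const material = new THREE.ShaderMaterial({
  vertexShader: `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  // The "two lines of GLSL" for a simple flat color:
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
    }
  `,
});

// Used like any built-in material:
const mesh = new THREE.Mesh(new THREE.SphereGeometry(1, 32, 16), material);
```

From there you can grow the shaders while three.js keeps handling the matrices and attributes for you.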
Since programmable shaders are the essence of modern 3D graphics, I hope my answer is not missing the point :) Sooner or later, anyone who does this needs to at least understand what goes on under the hood; it's the nature of the beast. Understanding the 4th dimension in homogeneous space is probably important as well.
This book is good for WebGL.
I just learned a little of both, and I feel that understanding the basics of WebGL is what matters: an introduction to WebGL is sufficient, and then you can jump into three.js. It will be pretty easy once you understand the underlying concepts of WebGL.
Useful links:
Best Intro I have read:
http://dev.opera.com/articles/view/an-introduction-to-webgl/
Comprehensive tutorials:
http://www.johannes-raida.de/tutorials.htm

What to study to get into robotics? [closed]

What should someone study at university level if they want to get into robotics and build robots? So far 'mechatronics' seems to be the field I'm looking for. I looked at a few plain 'robotics' courses, but they seem to cover only the electrical and computer work, and don't include any details on building the mechanical components of robots.
I'm a professional robotics research consultant, with 30 years of experience working for organizations like SRI International and JPL.
Like computers, robotics has quite a strong divide between the software and the hardware. Hardware is further subdivided into actuators and sensors.
If you'd said "I want to get into computers", I would explain that only a few hardware engineers actually design and build physical computers--most researchers assume that the hardware and firmware has been built already, and then they worry about the software--how to make the system actually work.
Similarly with robots, building the hardware is a job for the mechanical engineers (to design the structure and heat dissipation), with little bits and pieces for power electrical engineers (to spec the motors) and computer engineers (to design the firmware silicon). Next-generation robots also use industrial designers (to make the outsides look pretty, and the insides fit well together).
Research areas for actuator design include fingered hands; tentacles; hummingbird and other bird and insect wings; springy wheels; legs; non-electronic designs for high radiation areas; and surgical instruments.
With cameras in every cell phone, vision sensors are mostly a solved problem at this point. Research areas for sensor design include smart flexible tactile skin, brain wave sensors, and other biomedical sensors. There's still some room for good force sensors as well. These fall in the realms of materials engineering, computer engineering, mechanical engineering, and biomedical engineering.
In order to drive the actuators properly so they don't shake themselves apart, you need a control-theory engineer. Start with Fourier transforms so that you can then understand z-transforms. The learning curve on this mathematics is extremely steep, and careers are quite few, so either you have to be born to be a controls engineer or you should let someone else handle these lower-level details for you.
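For reference, these are the standard definitions involved (textbook material, not tied to any particular course):

```latex
% Discrete-time Fourier transform of a sampled signal x[n]:
X\!\left(e^{j\omega}\right) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}

% One-sided z-transform, the workhorse of digital control:
X(z) = \sum_{n=0}^{\infty} x[n]\, z^{-n}
```

Evaluating X(z) on the unit circle z = e^{j omega} recovers the Fourier transform, which is why Fourier transforms come first.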
Signal processing, for the medium- and low-level sensor drivers, has been under the domain of the EEs historically. This works its way up to image processing, which falls under computer science, and then image understanding, which is in the A.I. branch of CS.
However, as I mentioned, the hardware, firmware, and drivers are all manufacturing details that you solve once and then sell forever. Anybody can buy a Lego or a Bioloids kit off the shelf now, and start working with motors. It's not like 2006, when the Fujitsu HOAP humanoid robot we were working with at JPL was a $50,000 custom-ordered special.
Most of what I consider the really interesting work starts by assuming the hardware and drivers have already been accomplished--and then, what do you do with the system? This is completely in the realm of software.
Robotic software control starts with 3D simulators, which in turn are based on forward kinematics; eventually inverse kinematics; dynamics, if you feel like it; and physics-engine simulations. Math here centers around locations [position + orientation], which are best represented by using [4x4] homogeneous coordinate transformation matrices. These are not very hard, and you can get a good background in them from any computer graphics textbook. Make sure you follow the religion of post-multiplying by matrices ending in a column vector on the right; this allows you to chain base-to-waist-to-shoulder-to-elbow-to-hand kinematics in a way that you'll be able to understand. Early textbooks proposed premultiplying using row vectors, because they thought it wouldn't make a difference. It does.
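As a sketch of that convention (standard homogeneous-transform notation, column vector on the right):

```latex
% A location [position + orientation] packs a 3x3 rotation R and a
% 3x1 translation t into one [4x4] homogeneous matrix:
T = \begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix}

% Post-multiplying chains the frames left to right, base to hand:
p_{\text{base}} =
  T^{\text{base}}_{\text{waist}}\,
  T^{\text{waist}}_{\text{shoulder}}\,
  T^{\text{shoulder}}_{\text{elbow}}\,
  T^{\text{elbow}}_{\text{hand}}\;
  p_{\text{hand}}
```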
Of course the physics engines require a decent knowledge of physics.
Higher-level processing is accomplished using artificial intelligence, usually rules-based systems. Natural-language processing also can tie in linguistics and phonetics. Speech recognition and speech generation are again mostly signal processing, taught in EE and CS.
Recent advances build on Big Data, which uses statistics, Bayesian reasoning, and vector spaces and their bases (from mathematics).
Robotics has not yet broken out. It is still at the level cell phones were at when Gordon Gekko was walking on the beach talking into a "portable phone" the size of a shoe. I don't see robots becoming ubiquitous before 2020. Around 2025, being a robot programmer will be in as much demand as being an app programmer is today. Study lots of A.I. Start early.
State-of-the-art humanoid robot system design as of 2006 [short movie]:
http://www.seqcon.com/caseJPL.html
Very high level block diagram of components [graphic]:
http://www.seqcon.com/images/SystemSchematic640.gif
I would highly recommend looking into Artificial Intelligence for Robotics on Udacity; it is a very interesting course that covers the software and AI side. Coursera also offers a free online robotics course, as well as other courses that are very relevant and useful to robotics.
Mechanical and electrical engineering and computer science.
Mechanical engineering will inform choices about servos, linkages, gears, and all other mechanical components.
Control theory is the junction of mechanical and electrical engineering. You'll need that.
So much of control is digital these days, so EE and computer science will be a part of it.
It's a big field. Good luck.
Industrial robotics is usually handled by mechanical engineers, and sport/team robotics by electrical engineering, electronics engineering, or computer science majors. It all depends on what you mean by "robotics". Also, in case nobody else mentions it, a master's degree is strongly encouraged.
As an added bonus, the math used in industrial robotics is directly linked to the math used in game development. There isn't really a clear-cut line of who is supposed to be doing what in robotics.
Mechatronics is the current field of study for those interested in robotics. It combines the mechanical, electrical, controls, and software aspects of robotics.
In the past we came from many different backgrounds: mechanical engineers, electrical, electronics, and software. I am an application engineer for a robot manufacturer. I started out in avionics, moved to automated test equipment, then to automated material delivery systems; I became a robotics service technician and manager, then moved over to application programming and training.
One final note, be prepared to keep learning. This is a field that is constantly changing and evolving.

Microsoft Robotics Studio, simple simulation

I am about to start with Microsoft Robotics Studio.
My question to all the MSRS gurus: can simple simulations (such as obstacle avoidance and wall following) be done without any hardware?
Does MSRS offer 3-dimensional as well as 2-dimensional rendering? As of now I do not have any hardware and I am only interested in simulation; when I have the robot hardware, I may try to interface with it!
Sorry for the silly question - I am an MSRS noob, but I have previous robotics hardware and software experience.
Other than MSRS and the Player Project (Player/Stage/Gazebo), is there any other software that simulates robots effectively?
MSRS tackles several key areas, and one of them is simulation. The 3D engine is based on the AGEIA physics engine and can simulate not only your robot and its sensors but also a fairly complex environment.
The demo I saw had a Pioneer with a SICK lidar running around a cluttered apartment living room, with tables, chairs, and so on.
The idea is that your code doesn't even need to know if it's running on the simulator or the real robot.
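MSRS services are actually written in C# against its own service contracts; purely as an illustration of that idea (hypothetical names, not the MSRS API), the pattern looks like this:

```typescript
// Control code talks to an abstract drive contract and neither
// knows nor cares which implementation is bound behind it.
interface DriveService {
  setVelocity(left: number, right: number): void;
  readLidar(): number[]; // range readings, e.g. from a SICK scan
}

class SimulatedDrive implements DriveService {
  setVelocity(left: number, right: number): void {
    // forward the command to the physics engine
  }
  readLidar(): number[] {
    return []; // ray-cast against the simulated living room
  }
}

class RealDrive implements DriveService {
  setVelocity(left: number, right: number): void {
    // send the command over the robot's hardware link
  }
  readLidar(): number[] {
    return []; // poll the physical lidar
  }
}

// Obstacle avoidance written once, run against either backend:
function avoidObstacles(drive: DriveService): void {
  const ranges = drive.readLidar();
  const blocked = ranges.some((r) => r < 0.5); // anything within 0.5 m?
  drive.setVelocity(blocked ? -0.2 : 0.5, blocked ? 0.2 : 0.5);
}
```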
Edit:
A few links as requested:
Start here: http://msdn.microsoft.com/en-us/library/dd939184.aspx
[image: http://i.msdn.microsoft.com/Dd939184.image001(en-us,MSDN.10).jpg]
Then go here: http://msdn.microsoft.com/en-us/library/dd939190.aspx
[image: http://i.msdn.microsoft.com/Dd939190.image008(en-us,MSDN.10).jpg]
Then take a look at some more samples: http://msdn.microsoft.com/en-us/library/cc998497.aspx
[image: http://i.msdn.microsoft.com/Cc998496.Sumo1(en-us,MSDN.10).jpg]
The simple answer is yes: the MRDS simulator and Player/Stage have very similar capabilities. MRDS uses a video-game-quality physics engine under the hood, so you can do collisions and some basic physics on your robots, but it's not going to reach the accuracy of a MATLAB simulation (on the flip side, it's real-time and easier to develop with). You can do a lot in MRDS without any hardware.
MRDS uses some pretty advanced programming abstractions, so it can be a bit intimidating at first, but do the tutorials, and the course that has been posted to CodePlex, "Software Engineering for Robotics", and you will be fine. http://swrobotics.codeplex.com/

3D Character/Model Creator

I'm working on a project to create a 3D game using XNA/C#, and the game will use a lot of 3D characters.
Looking at current 3D games, some feature close to hundreds of characters, which led me to think that there must be some good 3D character/model creators.
To narrow the sample: the game will have characters like those in "Grand Chase". Are there any good (and easy) character model creators to use in XNA development? Free is better, of course, but I will consider paid versions too.
EDIT: Another question is about the movements of the characters. Are movements like walk, jump, and sit "created" by the "character creator tool" or by the game?
"Another question is about the movements of the characters. The movements like walk, jump, sit, etc. are 'created' by the 'character creator tool' or by the game?"
Animation in its various forms - keyframe, skeletal, and so forth - is created in the 3D modelling software.
The game then plays these animations at certain points. For example, pressing jump will play the jump animation. Games often use a form of linear interpolation to blend different animations together to smooth them.
Consider a football game: you can animate the footballer running in eight different directions, but what if the player suddenly changes direction mid-flow? The modeller could not account for this, so the engine will blend the animations together to provide a smooth transition, via linear interpolation or some other blending function.
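A minimal sketch of that blending step (hypothetical types, not any particular engine's API; real engines typically store rotations as quaternions and blend them with spherical interpolation rather than the per-component lerp shown here):

```typescript
// One pose = a position/rotation triple per named joint.
type Pose = { [joint: string]: { x: number; y: number; z: number } };

// Linear interpolation: t = 0 gives a, t = 1 gives b.
function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

// Blend the "from" pose toward the "to" pose as t goes 0 -> 1
// over the transition window, joint by joint.
function blendPoses(from: Pose, to: Pose, t: number): Pose {
  const out: Pose = {};
  for (const joint of Object.keys(from)) {
    out[joint] = {
      x: lerp(from[joint].x, to[joint].x, t),
      y: lerp(from[joint].y, to[joint].y, t),
      z: lerp(from[joint].z, to[joint].z, t),
    };
  }
  return out;
}
```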
Software
As for software, free editors such as Blender will do. However, I prefer Maya/Max. You can often get student editions of these; check their official websites. I got a free six-month version via my university. While you legally cannot use the models in commercial games, for learning purposes it is fine. I believe they used to offer a Personal Learning Edition, but this no longer exists as far as my searching has found.
Most 3D game objects are created in 3D software such as Maya and Blender. But there are indeed applications that speed up character modeling, such as Poser. If you quickly need a low-poly mesh without big bucks and a lot of exporters, try MilkShape 3D. It's cheap and easy to work with. You can easily build meshes with joint animations, which you can edit later to fine-tune your characters.
"Another question is about the movements of the characters. The movements like walk, jump, sit, etc. are 'created' by the 'character creator tool' or by the game?"
Poser 3D. It's not free, but it comes with a good library for starters. You might also like DAZ 3D, also a commercial product. Personally, I am not excited about most 3D modeling software that comes for free; the exceptions are Blender and Anim8or. If you are not that well tuned into modeling professionally, I would still recommend you go with MilkShape 3D. It has a really easy learning curve, and you can pop in and get work done quickly, just to test and iterate on your game (there is more to a game than models). Eventually, you can fine-tune all the models in whatever software you prefer.
The XSI Mod Tool will let you do character modelling and animation; it is a (slightly) cut-down version of the full XSI tool.
It's free for non-commercial use and has close integration with XNA; plus, it has plugins that support the Unreal Engine, CryEngine, etc.
Available here
If you want, you could try using XBL Avatars; the bonus is that players will actually get to use their avatars in-game, and AFAIK you can procedurally generate characters and such through a code API.
I strongly recommend Blender. It's free, it has tons of robust features, and it's widely used by the XNA community, myself included.
It can be a bit time-consuming to learn how to use it, but once you master the basics, Blender feels like a pencil on paper. (Or, for those of us who suck at drawing, a really good artist that can read your mind :P )
There's also a script called MakeHuman that allows you to parametrically create human models, and I think it works pretty well, myself.

Resources