Are there any limitations with using DirectCompute on DX10.1 GPUs? I will do most of my development on a DX11 desktop, but I'd like to demo code on a DX10.1 laptop. It'll be a MacBook Pro running Win7 in Boot Camp. The GPU is an NVIDIA GT 330M. What limitations can I expect?
Edit: I found a page about using Compute Shaders on DX10, but it's not entirely clear to me if these are serious limitations or not.
Edit 2: My goal is to learn a bit about quantitative finance and solving PDEs.
Frankly, I think CS 4.x is rather limiting because of the lack of atomics and double precision, the restrictions on accessing groupshared memory, and the 16 KB groupshared limit. Also, only one UAV can be bound.
I believe most DirectCompute developers will use CS 4.x for post-processing in games and the like (probably with both CS 4.x and CS 5.0 code paths). People who want to do heavy GPGPU work will learn with CS 4.x and later move on to CS 5.0.
Since you say you don't yet have a feel for the CS 4.x limitations, I suggest going with CS 4.x and sticking to it for now.
But really it all depends on what you are developing, how, and for what audience (professional developer vs. hobby coder, shipping your application now vs. in two years, mainstream audience vs. pro market, etc.).
I can't tell you if the limitations are serious or not, as 1) it depends on what you're trying to achieve, and 2) I simply don't know enough about the compute shader.
However, you can run the DirectX Caps Viewer to see what features your device will support (or what limitations you can expect). Also, AFAIK other than the limitations highlighted in the link you posted, you will only be able to use CS 4.0, not the new features in CS 5.0.
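If you want to check programmatically rather than with the Caps Viewer, here is a minimal sketch (an assumption on my part that you go through the Direct3D 11 API, which is where downlevel compute support is exposed) that creates a device and asks whether CS 4.x is available on 10.x-class hardware:

    // Minimal sketch: query compute shader support on downlevel (10.x) hardware.
    // Assumes the Direct3D 11 runtime (ships with Windows 7) and the Windows SDK headers.
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        ID3D11Device* device = nullptr;
        D3D_FEATURE_LEVEL level;
        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                       nullptr, 0, D3D11_SDK_VERSION,
                                       &device, &level, nullptr);
        if (FAILED(hr)) { std::printf("No D3D11 device available.\n"); return 1; }

        std::printf("Feature level: 0x%04x\n", level); // 0xa100 = 10.1 on a GT 330M

        // On feature level 10.x, CS 4.x support is optional and must be queried.
        D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                    &opts, sizeof(opts));
        std::printf("CS 4.x + raw/structured buffers: %s\n",
                    opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x ? "yes" : "no");

        device->Release();
        return 0;
    }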
According to the documentation (https://developer.apple.com/documentation/metal/gpu_features/understanding_gpu_family_4), "On A7- through A10-based devices, Metal doesn't explicitly describe this tile-based architecture". In the same article I saw "Metal 2 on the A11 GPU" and got confused, because I couldn't find any more information about Metal 2 support in the Metal Shading Language specification. For example, I found the table "Attributes for fragment function tile input arguments" and the note "iOS: attributes in Table 5.5 are supported since Metal 2.0."
Is Metal 2 support specific to a GPU family?
Not all features are supported by all devices. Newer devices generally support more features; older devices might not support newer ones.
There are several factors to this support.
First, each MTLDevice has a set of MTLGPUFamily values it supports, which you can query with the supportsFamily method. Some documentation articles mention what family the device needs to support to use this or that feature, but generally you can find that info in the Metal Feature Set Tables. Support for a family may vary depending on the chip itself and on how much memory or which other units are available to it; chips are grouped into families based on those characteristics.
There are other supports* queries on an MTLDevice, though, that don't depend on the family of the device but rather on the device itself, for example the supportsRaytracing query. These are also based on the GPU itself, but they are separate, probably because they don't fall neatly into any of the "families".
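To illustrate those first two kinds of query, here is a minimal sketch. Using Apple's metal-cpp C++ wrapper is an assumption for the example; the Objective-C and Swift calls have the same names:

    // Minimal sketch using the metal-cpp wrapper (assumed to be added to the project).
    #define NS_PRIVATE_IMPLEMENTATION
    #define MTL_PRIVATE_IMPLEMENTATION
    #include <Foundation/Foundation.hpp>
    #include <Metal/Metal.hpp>
    #include <cstdio>

    int main()
    {
        MTL::Device* device = MTL::CreateSystemDefaultDevice();
        if (!device) { std::printf("No Metal device.\n"); return 1; }

        // 1) Family-based support: the Metal Feature Set Tables tell you which
        //    family a given feature needs.
        bool apple4 = device->supportsFamily(MTL::GPUFamilyApple4);

        // 2) Device-based support that doesn't map onto a single family.
        bool raytracing = device->supportsRaytracing();

        std::printf("Apple4 family: %d, ray tracing: %d\n", apple4, raytracing);
        device->release();
        return 0;
    }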
The third kind of support is based on the OS version. Newer versions of the OS might ship new APIs or extensions to existing APIs. Those are marked with API_AVAILABLE macros in the headers and may only be used on OSes of the same version or higher. To query support for these, you use either the availability macros or the if (@available(...)) syntax in Objective-C (or #available in Swift). Here, API availability isn't so much affected by the GPU itself, but rather by having a newer OS and the drivers that go with it.
The last kind of "support" that limits some features is the Metal Shading Language version. It's tied to the OS version, and it's what those notes in the Metal Shading Language specification you mentioned refer to. Here, the availability of features is a mix of compiler-version limitations (not everyone is going to use the latest and greatest spec; I think most production game engines are on something like Metal 2.1, at least the games that aren't using the latest engine versions) and device limitations. For example, tile shaders are gated on a compiler version, but they are also limited to Apple Silicon GPUs.
So there are different types of support at play when you are using Metal in your application. It's easy to confuse them, but it's important to know each one.
With the push towards multimedia-enabled mobile devices, this seems like a logical way to boost performance on these platforms while keeping general-purpose software power efficient. I've been interested in the iPad hardware as a development platform for UI and data display/entry usage, but I'm curious how much processing capability the device itself has. OpenCL would make it a juicy hardware platform to develop on, even though the licensing seems like it kinda stinks.
OpenCL is not yet part of iOS.
However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0. 2.0 lets you create your own programmable shaders to run on the GPU, which would let you do high-performance parallel calculations. While not as elegant as OpenCL, you might be able to solve many of the same problems.
Additionally, iOS 4.0 brought with it the Accelerate framework which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202 - The Accelerate framework for iPhone OS in the WWDC 2010 videos for more on this.
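As a small illustration of what Accelerate gives you, here is a minimal sketch of one vDSP call (assuming you link the Accelerate framework); the multiply is vectorized on the CPU for you:

    // Minimal sketch: element-wise multiply of two float vectors with vDSP.
    // Link against the Accelerate framework.
    #include <Accelerate/Accelerate.h>
    #include <cstdio>

    int main()
    {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float c[4];

        // c[i] = a[i] * b[i], with a stride of 1 through each array.
        vDSP_vmul(a, 1, b, 1, c, 1, 4);

        std::printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);
        return 0;
    }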
Caution! This question ranks as the 2nd result on Google. However, most answers here (including mine) are out of date. People interested in OpenCL on iOS should visit more up-to-date entries like this one -- https://stackoverflow.com/a/18847804/443016.
http://www.macrumors.com/2011/01/14/ios-4-3-beta-hints-at-opencl-capable-sgx543-gpu-in-future-devices/
The iPad 2's GPU, the PowerVR SGX543, is capable of OpenCL.
Let's wait and see which iOS release brings OpenCL APIs to us. :)
Following from nacho4d:
There is indeed an OpenCL.framework in iOS 5's private frameworks directory, so I would suppose iOS 6 is the one to watch for OpenCL.
Actually, I've seen it in OpenGL-related crash logs for my iPad 1, although that could just be the CPU (implementing parts of the graphics stack, perhaps, like on OS X).
You can compile and run OpenCL code on iOS using the private OpenCL framework, but you probably won't get such a project into the App Store (Apple doesn't want you to use private frameworks).
Here is how to do it:
https://github.com/linusyang/opencl-test-ios
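For reference, the host side of this is just standard OpenCL; here is a minimal sketch of enumerating the GPU device (assuming you link against the private OpenCL.framework, so again: experimentation only, not App Store safe):

    // Minimal sketch of standard OpenCL host code: find a GPU device and print its name.
    // On iOS this relies on the private OpenCL.framework.
    #include <OpenCL/opencl.h>
    #include <cstdio>

    int main()
    {
        cl_platform_id platform;
        cl_device_id device;
        if (clGetPlatformIDs(1, &platform, nullptr) != CL_SUCCESS ||
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
            std::printf("No OpenCL GPU device found.\n");
            return 1;
        }

        char name[256] = {};
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
        std::printf("OpenCL GPU: %s\n", name);
        return 0;
    }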
OpenCL? Not yet.
A good way of guessing the next public frameworks in iOS is to look at the private frameworks directory.
If you see what you are looking for there, then there's a chance.
If not, then wait for the next release and look again in the private stuff.
I guess CoreImage is coming first because OpenCL is too low level ;)
Anyway, this is just a guess
I am really interested in using Unity3d to develop an app.
I like the fact that I can develop once and port the app to multiple platforms (Mac/Windows/iPhone/Android), and the performance on my Mac seems to be quite good.
This will be the first time I write an app for iPhone, and I am curious about performance issues down the road. I think I will definitely use Unity3d on iPhone for a prototype, but am wondering if building an iPhone Unity3d app will use the iPhone resources as efficiently as a native app written in Objective-C.
The Unity3d site seems to suggest that Unity3d algorithms are optimized, and I thought that if I asked that question in the Unity3d forums, that would be the kind of response I would get. Ideally, I'd be interested in hearing from someone who has built an app in Unity3d and Objective-C and can compare the two.
The discussion that got me thinking about this was Andrew and Peter Mortensen's response to a question about iOS development cost, which begins "There is a much easier way to develop iPhone apps than learning Cocoa."
There are specific resources in Unity that will help with mobile development, including assets, shaders, etc. that are designed specifically with mobile in mind.
You certainly won't want to take 'unoptimized' PC-quality assets, drop them into a Unity project, and export that for the iOS platform, as that will guarantee poor/unreliable performance. What you want to do is start building out a scene using assets of similar quality to those you want for your game and then see what the performance is on a real device. This will give you a feel for what level of performance you can expect from your game in production.
Remember that the performance of an iPhone, iPad, iPad 2, etc. will vary wildly depending on what you're doing and which features you're touching. While Unity3D has been heavily optimized to deal with a variety of scenarios, you can certainly do things like fog, which pushes the fill rate (a known limitation of the platform), and end up with horrendous performance.
Could you possibly get more performance out of building your application purely in Objective-C? If you have the right skill set in engine development to design a specific implementation of the technology for your specific requirements, certainly.
You just need to decide whether you want to spend your time writing technology or building product. Most people who choose Unity do so because you get an exceptionally good engine whose performance most people cannot beat (try building your own landscape engine), while at the same time getting exceptional time to market... and it's time to market that matters most in many cases.
This is an old post, but I figured I'd answer it because no one here has really got it quite right.
It's important to keep in mind that the internal core workings of Unity are entirely native: the physics engine (and consequently everything dealing with collision), the occlusion system (Umbra), and the entire core of the rendering engine. All of that is written in C/C++ and runs at full native speed on any platform. What AmitApollo says is not correct: Unreal Engine 3 is not more directly 'native' than Unity. Both Unity and Unreal Engine 3, as well as any other 3D engine like Ogre or cocos3d, have their core rendering systems written in C/C++. Some of these engines may implement certain internal rendering algorithms better than others and may thus produce better performance, but this has nothing to do with whether or not they are 'native', because the internal core rendering system is native in all of them.
The internal workings of the physics engine are written in C/C++ as well, and thus the physics engines in both UE3 and Unity run at 'full native speed'.
The Epic Citadel demo also does not show greater technical prowess or performance than Unity on iOS. Much of the 'visual impact' of the Citadel demo comes simply from the fact that it is really good artwork. The Citadel demo is not pushing a higher vertex count than Unity could handle on iOS, and it is not demonstrating any more advanced shader or lighting techniques than Unity can do on iOS. In fact, there are more examples of Unity showing off advanced mobile rendering techniques than Unreal Engine 3 has demonstrated. Look at games like Shadowgun or Bladeslinger made in Unity: both demonstrate more advanced mobile rendering techniques than Unreal Engine 3 has shown, such as light probes, mobile BRDF shaders with translucency and normal mapping, and well-implemented dynamic mobile shadows, to name a few. The vast majority of the most successful 3D games in the App Store are Unity games, and Unity has accordingly put a lot of R&D into its mobile rendering performance and capabilities.
Now, Unity is scripted in C# on Mono, which does run slower than native code, about 50% slower on iOS by most estimates. But keep in mind that you only write game logic in this code. You do not write any C#/Mono code in Unity that deals with the workings of its internal rendering system, nor with the internal workings of the physics system. You only write game logic in C#, which then interfaces with the rendering and physics core, which executes at full native speed. Mono C# does execute slower than native C++, but if you program intelligently, I think you will find this is hardly a hindrance at all, because you only do game logic in Mono C#, and game logic is not necessarily CPU heavy. In my experience, it is really quite difficult to make an iPad 2 drop below 60 fps on game logic written purely in Mono C#. I have never actually been hindered by this at all.
If we are to compare to Unreal Engine 3, keep in mind that UE3 is also set up to have its game logic programmed in a non-native language, UnrealScript. UnrealScript is a language much like Mono C# or Java, in that it is compiled down to bytecode and then interpreted at runtime. Meaning that, just like in Unity, game logic is not 'native' in UE3.
Now if you look here:
http://lua-users.org/wiki/LuaVersusUnrealScript
That is a benchmark comparing UnrealScript to C++ on simple arithmetic operations using ints. It shows that UnrealScript is 1/4 to 1/20th the speed of C++.
Then have a look here:
http://www.codeproject.com/Articles/212856/Head-to-head-benchmark-Csharp-vs-NET
If you scroll down to the C# vs. C++ simple arithmetic benchmark, it shows Mono C# is about 3/4 the speed of C++ doing simple int arithmetic, and about 1/2 the speed when doing simple arithmetic with floats. The int64 and double benchmarks don't really mean much to us, because typically you'll never use those in performance-critical iOS game logic.
Other benchmarks there do show Mono C# at times having as little as 1/20th the performance of C++, but those are very specific tests; really, the best apples-to-apples benchmarks I could find are those simple arithmetic ones.
So, since Unity's scripting runs on Mono C# and UE3's runs on UnrealScript, Unity is actually the engine that will offer you radically better performance in game logic code.
The notion that UE3 is any more advanced, or offers any more performance, or any greater graphical capability than Unity on iOS is simply not true. Quite the contrary is true.
Now it is true that with something like cocos3d you could potentially get better performance, because your game logic could be written natively in C++ as well. But the benefits of working with a scripting language like C# for game logic far outweigh, I think, a performance loss that is generally never an issue. Namely, a scripting language offers you faster design iteration, which is really critical for games given how quirky things can be and how frequently you have to recompile and test code.
However, in Unity, it is really easy to write native code plugins with the Pro version. So if you ever do have a piece of performance critical code that needs to run at native speed, you can write it in C++, compile it to a native library, then call that from Mono C#.
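For illustration, here is a minimal sketch of what the native side of such a plugin can look like; the function name and the DllImport line in the comment are just made-up examples, not anything from the Unity docs:

    // Minimal sketch of a native plugin entry point (hypothetical function name).
    // Build into a static library for iOS and call from C# with something like:
    //   [DllImport("__Internal")] static extern float SumSquares(float[] data, int count);
    extern "C" float SumSquares(const float* data, int count)
    {
        // The performance-critical inner loop runs at native speed.
        float sum = 0.0f;
        for (int i = 0; i < count; ++i)
            sum += data[i] * data[i];
        return sum;
    }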
Also keep in mind that if you are targeting all iOS devices, heavy GPU graphics mean drastic performance discrepancies from device to device: from the iPhone 3GS to the 4, from the 4/4S to the iPad 2 and 3, and even certain games on the newer iPhone 5 or iPad 4 could run at a higher FPS than on their predecessors. Keep your poly counts low, keep terrain resolutions low, and note that even something as subtle as pixel error can drastically affect performance. Fog will always produce a strain. Textures larger than 512x512 may cause problems, as can multiple light sources; it's much faster to have no light rendering and to bake the shadows and highlights. I also found that running at native resolution, as opposed to 'best performance', may hinder performance (Unity 4). Billboarding and occlusion culling are also topics you'll want to look up. There is a fine line between looking good and running slowly.
If performance is an issue for you, you may want a different engine altogether. A more directly "native" engine like Unreal Engine 3 is amazing in its capabilities, and it can do it without much overhead. Case in point: the Epic Citadel demo app running on an iPhone 4 or 3GS. Something comparable in Unity would be slow and wouldn't quite look as sexy.
Perhaps it's a good idea to take a look at other games made with Unity, see where yours fits in, and get a sense of what kind of performance you can expect.
http://www.youtube.com/watch?v=7aVttB25oPo
http://unity3d.com/gallery/game-list/
One asset that is helpful for increasing performance on iOS is KGFSkyBox.
We found out that Unity3D skyboxes use up to 6 draw calls! This is quite a problem on devices with a limit of around 30 draw calls!
We solved this by implementing KGFSkyBox, which reduces the draw calls to 1 if you have terrain (it hides the bottom sky hemisphere). If you do not use terrain, KGFSkyBox renders using 2 draw calls, which is still better than 6!
Check it out here:
http://u3d.as/4Wg
If you have any questions or suggestions just contact us here: support#kolmich.at
So I am planning to start learning DirectX by grabbing Frank Luna's book "Introduction to 3D Game Programming with DirectX 10". But since I have a GeForce Go 7 graphics card, I am wondering whether I will at least be able to test the code from the book, or whether I should get his older book about DirectX 9, which my GPU supports. Then again, it would be a bit of a pity to learn outdated material, since I've read that DX10 introduced quite a lot of new concepts, so I am totally confused at the moment.
On the other hand, perhaps with the hardware I have I would be happier learning an older version of OpenGL?
If you don't have the money to get updated hardware, get the book to match the hardware you have.
Most of what you need to learn transfers from one version to the other: if you get proficient with one version, you can move to another and keep most of your knowledge, because most of what you need for graphics programming isn't the API.
You just need to get started and get some code running as quick as possible!
If you have Windows 7 you could use WARP until you can get a hold of better hardware, but it will be much slower than using an actual DirectX 10 graphics card.
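For illustration, here is a minimal sketch of falling back to WARP when no suitable hardware device is available. Using the Direct3D 11 API here is an assumption on my part, since it exposes the same 10.x feature levels; the Direct3D 10.1 API has an equivalent D3D10_DRIVER_TYPE_WARP:

    // Minimal sketch: try a hardware device first, then fall back to WARP
    // (the software rasterizer) so DX10-level code can still run on an older GPU.
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0 };
        ID3D11Device* device = nullptr;
        D3D_FEATURE_LEVEL got;

        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                       wanted, 2, D3D11_SDK_VERSION, &device, &got, nullptr);
        if (FAILED(hr))
        {
            // No DX10-class GPU: WARP is correct but much slower.
            hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                                   wanted, 2, D3D11_SDK_VERSION, &device, &got, nullptr);
        }
        if (FAILED(hr)) { std::printf("No suitable device.\n"); return 1; }

        std::printf("Created device at feature level 0x%04x\n", got);
        device->Release();
        return 0;
    }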
Every indication I have, based on my experience in embedded computing, is that doing something like this would require expensive equipment to get access to the platform (ICE debuggers, JTAG probes, I2C programmers, etc.), but I've always wondered if some ambitious hacker out there has found a way to load native code onto a BlackBerry device. Anyone?
Edit: I'm aware of the published SDK and its attendant restrictions. I'm curious if anyone has attempted to get around them, and if so, how far they got.
I've seen this question pop up in a number of different forums over time. The original BlackBerries were programmable in C++, but I think RIM ran up against the problems of trying to implement a secure platform in the C/C++ compile-to-native paradigm.
The devices do have JTAG ports, but unless one could get hold of the RIM code as a place to start, the problem is enormous.
I also have to wonder how useful a BlackBerry with a replacement FOSS operating system would be, since it would not likely have the protocols to connect to BES or BIS, send PINs, etc. If one is simply looking for the power of a handheld computing platform, I suspect there are many more likely candidates available.
No, C++ is no longer a supported RIM development tool; they phased it out a number of years ago. Client applications can be developed in Java (or one of a few 5GL frameworks), and web and server-side apps can be developed using standard tools.
For those looking for updated information: the new PlayBook OS, also known as QNX, also known as BlackBerry 10 (or it will be when the phones running it come out), is in fact C/C++ based, also using QML and a C++ add-on called Cascades.
Unfortunately the official SDK website only seems to mention Java. According to wikipedia, different versions of the BlackBerry use different processors. Combined with the fact that RIM uses a proprietary operating system for the devices, it becomes pretty difficult to develop native code without official tools. There is also a partial API-level security restriction which would further prohibit advanced tinkering.
Just randomly searching for an answer to this, I came across http://supportforums.blackberry.com/t5/Tablet-OS-SDK-for-Adobe-AIR/Native-C-C-SDK/td-p/778009, which mentions that BlackBerry intends to release a C/C++ SDK soon; more details will be provided at the 2011 Game Developers Conference.