Does Intel Atom support OpenVG?

Some time ago, I read rumors about a hardware implementation of OpenVG in Intel Atoms of a "new generation". Now I cannot find any evidence. So, are there at least some plans to support OpenVG at all?

The answer: Yes.
The announced Intel Atom Z6xx is a SoC (System-on-a-Chip) that includes the GMA 600 graphics core. The GMA 600 can accelerate OpenVG as well as OpenGL. I'm not sure this acceleration makes much practical sense, but it is supported.

Related

Metal 2 API features on older devices

According to the documentation (https://developer.apple.com/documentation/metal/gpu_features/understanding_gpu_family_4), "On A7- through A10-based devices, Metal doesn't explicitly describe this tile-based architecture". In the same article I saw "Metal 2 on the A11 GPU" and got confused, because I couldn't find any more information about Metal 2 support in the Metal Shading Language specification. For example, I found the table "Attributes for fragment function tile input arguments" and the note "iOS: attributes in Table 5.5 are supported since Metal 2.0."
Is Metal 2 support specific to a GPU family?
Not all features are supported by all devices: newer devices generally support more features, while older devices might not support the newer ones.
Several different factors determine this support.
First, each MTLDevice has a set of MTLGPUFamily values it supports, which you can query with the supportsFamily method. Some documentation articles mention which family a device needs to support to use a particular feature, but generally you can find that information in the Metal Feature Set Tables. Family support varies with the chip itself: how much memory and which hardware units are available to it. The chips are grouped into families based on those capabilities.
Second, there are other supports* queries on an MTLDevice that don't depend on the family of the device, but rather on the device itself, for example the supportsRaytracing query. These are also based on the GPU itself, but are kept separate, probably because they don't fall neatly into any of the "families".
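A minimal sketch of both kinds of device-based query (the particular family and feature checked here are just examples):

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Family-based support: coarse capability tiers, documented in the
// Metal Feature Set Tables.
if device.supportsFamily(.apple7) {
    // A14/M1-class family features can be used here.
}

// Device-based support: standalone queries that don't map onto a family.
if device.supportsRaytracing {
    // Ray tracing APIs (acceleration structures, intersectors) can be used.
}
```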
Third, some support is based on the OS version. Newer OS versions might ship new APIs or extensions to existing APIs. Those are marked with API_AVAILABLE macros in the headers and may only be used on an OS of the same version or higher. To query support for these, you need the @available syntax in Objective-C or the #available syntax in Swift. Here, API availability isn't so much affected by the GPU itself as by having a newer OS and the drivers that go with it.
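For example (a sketch; the fast-resource-loading queue is just one API that carries such an availability annotation, added in the iOS 16 SDK):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

// MTLIOCommandQueue is marked API_AVAILABLE(macos(13.0), ios(16.0)) in the
// headers, so any use of it must be guarded at runtime.
if #available(iOS 16.0, macOS 13.0, *) {
    let descriptor = MTLIOCommandQueueDescriptor()
    if let ioQueue = try? device.makeIOCommandQueue(descriptor: descriptor) {
        // Use the I/O command queue to load assets directly into Metal resources.
        _ = ioQueue
    }
}
```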
The last kind of "support" that limits some features is the Metal Shading Language version. It's tied to the OS version, and it's what those notes in the Metal Shading Language specification you mentioned refer to. Here, feature availability is a mix of compiler-version limitations (not everyone is going to use the latest and greatest spec; I think most production game engines target something like Metal 2.1, at least the games that aren't on the latest engine versions do) and device limitations. For example, tile shaders are gated on a compiler version, but they are also limited to Apple Silicon GPUs.
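You can pin the shading-language version when compiling at runtime by setting it on the compile options (a sketch; the trivial kernel is just a placeholder):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

let source = """
#include <metal_stdlib>
using namespace metal;
kernel void noop() {}
"""

// Target a specific Metal Shading Language version rather than the newest
// one the OS supports, mirroring what a production engine might pin.
let options = MTLCompileOptions()
options.languageVersion = .version2_1

do {
    let library = try device.makeLibrary(source: source, options: options)
    print("Compiled against MSL 2.1:", library.functionNames)
} catch {
    print("Compilation failed:", error)
}
```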
So there are different types of support at play when you use Metal in your application. It's easy to confuse them, but it's important to know each one.

Intel i5 memory consistency model?

How can I check which memory consistency model an Intel i5 has? I have been searching for Macs and Intel, and it seems impossible to find. Any tips on how to search for this information?
Memory ordering rules for the different Intel processors are described in the Intel SDM, Volume 3A, Chapter 8, Section 8.2 "Memory Ordering". There used to be an official whitepaper on the subject, which is now only available from unofficial sources.
Note that the information published in different revisions of the SDM from 2006 onward has changed over time. An overview of what Intel and AMD have each independently stated about x86 memory ordering can be found here.

Mechanism like CUDA streams in Xeon Phi?

I am new to working with the Xeon Phi coprocessor, and my question is:
Does a mechanism like CUDA streams exist on the Xeon Phi?
That's right: hStreams essentially covers the key features of CUDA Streams and OpenCL, and several CUDA Streams and OpenCL apps have been ported to hStreams. Users of hStreams, like the OmpSs folks at Barcelona Supercomputing Center, assessed that hStreams was easier to use than CUDA Streams, offered better support for synchronization, required fewer unique APIs, and needed fewer lines of code.
For more documentation, please see http://lotsofcores.com/hStreams, where you can also find a link for downloading MPSS and a blog that offers a few highlights of its features, including hStreams.
Once you've installed hStreams, look in /usr/share/doc/hStreams.
Yes. The Intel Manycore Platform Software Stack (MPSS) provides hStreams, which is designed to be similar to the CUDA streams model.
There is a chapter in High Performance Parallel Programming Pearls II on hStreams, which you can preview in Google Books.
I can't find any detailed documentation on Intel's website, but the release notes say that you can find PDFs in the MPSS distribution, which should be on any Intel Xeon Phi coprocessor system.
BSC has detailed documentation of hStreams here.

General GPU programming on iPhone [duplicate]

With the push toward multimedia-enabled mobile devices, this seems like a logical way to boost performance on these platforms while keeping general-purpose software power efficient. I've been interested in the iPad hardware as a development platform for UI and data display/entry usage, but I'm curious how much processing capability the device itself offers. OpenCL would make it a juicy hardware platform to develop on, even though the licensing seems like it kind of stinks.
OpenCL is not yet part of iOS.
However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0. OpenGL ES 2.0 lets you create your own programmable shaders to run on the GPU, which lets you do high-performance parallel calculations. While not as elegant as OpenCL, you might be able to solve many of the same problems this way.
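A sketch of that approach (the shader "kernel" here is illustrative; a complete GPGPU round trip would also render a full-screen quad into a texture-backed framebuffer and read the results back):

```swift
import Foundation
import OpenGLES

// Create a GLES2 context; all GL calls below require a current context.
let context = EAGLContext(api: .openGLES2)!
_ = EAGLContext.setCurrent(context)

let fragmentSource = """
precision highp float;
uniform sampler2D inputData;
varying vec2 texCoord;
void main() {
    // Example "kernel": square every value from the input texture.
    vec4 v = texture2D(inputData, texCoord);
    gl_FragColor = v * v;
}
"""

// Compile the per-pixel computation as a fragment shader.
let shader = glCreateShader(GLenum(GL_FRAGMENT_SHADER))
var source: UnsafePointer<GLchar>? = (fragmentSource as NSString).utf8String
glShaderSource(shader, 1, &source, nil)
glCompileShader(shader)

var status: GLint = 0
glGetShaderiv(shader, GLenum(GL_COMPILE_STATUS), &status)
print(status == GL_TRUE ? "kernel compiled" : "compile failed")
```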
Additionally, iOS 4.0 brought with it the Accelerate framework, which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202, "The Accelerate framework for iPhone OS", in the WWDC 2010 videos for more on this.
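A small example of the kind of operation Accelerate provides (the data here is made up):

```swift
import Accelerate

let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
var sum = [Float](repeating: 0, count: a.count)

// Element-wise vector addition on the CPU, using the SIMD units.
vDSP_vadd(a, 1, b, 1, &sum, 1, vDSP_Length(a.count))
print(sum) // [11.0, 22.0, 33.0, 44.0]
```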
Caution! This question is ranked as the 2nd result by Google. However, most answers here (including mine) are out of date. People interested in OpenCL on iOS should visit more up-to-date entries like this one: https://stackoverflow.com/a/18847804/443016.
http://www.macrumors.com/2011/01/14/ios-4-3-beta-hints-at-opencl-capable-sgx543-gpu-in-future-devices/
The iPad 2's GPU, the PowerVR SGX543, is capable of OpenCL.
Let's wait and see which iOS release brings OpenCL APIs to us. :)
Following from nacho4d:
There is indeed an OpenCL.framework in iOS 5's private frameworks directory, so I would suppose iOS 6 is the one to watch for OpenCL.
Actually, I've seen it in OpenGL-related crash logs for my iPad 1, although that could just be the CPU (implementing parts of the graphics stack, perhaps, like on OS X).
You can compile and run OpenCL code on iOS using the private OpenCL framework, but you probably won't get such a project into the App Store (Apple doesn't want you to use private frameworks).
Here is how to do it:
https://github.com/linusyang/opencl-test-ios
OpenCL? Not yet.
A good way of guessing the next public frameworks in iOS is to look at the private frameworks directory.
If you see what you are looking for there, then there is a chance.
If not, then wait for the next release and look again in the private stuff.
I guess CoreImage is coming first, because OpenCL is too low-level ;)
Anyway, this is just a guess.

Can DirectCompute really be used on a DX10.1 GPU?

Are there any limitations to using DirectCompute on DX10.1 GPUs? I will do most of my development on a DX11 desktop, but I'd like to demo the code on a DX10.1 laptop: a MacBook Pro running Windows 7 in Boot Camp, with an Nvidia 330M GPU. What limitations can I expect?
Edit: I found a page about using Compute Shaders on DX10, but it's not entirely clear to me whether these are serious limitations or not.
Edit 2: My goal is to learn a bit about quantitative finance and solving PDEs.
Frankly, I think CS 4.x is rather limiting because of the lack of atomics and double precision, the restrictions on accessing groupshared memory, and the 16 KB groupshared limit. Also, you can have only one UAV bound at a time.
I believe most DirectCompute developers will use CS 4.x for post-processing in games and the like (probably with both CS 4.x and CS 5.0 code paths). People who want to do heavy GPGPU work will learn on CS 4.x and later move on to CS 5.0.
Since you say you don't yet have a clue about the CS 4.x limitations, I suggest going with CS 4.x and sticking to it for now.
But really, it all depends on what you are developing, how, and for whom (professional developer vs. hobby coder, shipping your application now vs. in two years, mainstream audience vs. pro market, etc.).
I can't tell you whether the limitations are serious or not, as 1) it depends on what you're trying to achieve, and 2) I simply don't know enough about compute shaders.
However, you can run the DirectX Caps Viewer to see which features your device supports (and which limitations to expect). Also, AFAIK, other than the limitations highlighted in the link you posted, you will only be able to use CS 4.0, not the new features in CS 5.0.
