How can I check which memory consistency model an Intel i5 has? I have been searching in the context of Macs and Intel, and it seems impossible to find. Any tips on how to search for this information?
Memory ordering rules for different Intel processors are now described in the Intel SDM, Volume 3A, Chapter 8, Section 8.2 "Memory Ordering". There used to be an official whitepaper on the subject, now only available from non-official sources.
Note that the information published in different revisions of the SDM from 2006 onward has changed over time. An overview of what Intel and AMD have each stated about x86 memory ordering can be found here.
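To make Section 8.2's rules concrete: the one reordering x86 permits is a store being delayed past a later load of a different location. Here is a minimal C++ litmus-test sketch (the variable names are mine, and it is illustrative rather than specific to any i5 model):

    #include <atomic>
    #include <thread>

    // Store-buffer litmus test: under x86-TSO the only visible reordering is a
    // store being delayed (in the store buffer) past a later load of a different
    // location, so without fences r1 == 0 && r2 == 0 is a permitted outcome.
    std::atomic<int> x{0}, y{0};
    int r1, r2;

    void t1() {
        x.store(1, std::memory_order_relaxed);
        // std::atomic_thread_fence(std::memory_order_seq_cst); // emits MFENCE on x86
        r1 = y.load(std::memory_order_relaxed);
    }

    void t2() {
        y.store(1, std::memory_order_relaxed);
        // std::atomic_thread_fence(std::memory_order_seq_cst);
        r2 = x.load(std::memory_order_relaxed);
    }

    int main() {
        std::thread a(t1), b(t2);
        a.join();
        b.join();
        // With both fences uncommented, at least one of r1 and r2 must be 1.
        return 0;
    }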
I am having some difficulty finding a library with which to explore machine learning/AI. I have a pair of R9 290Xs, and can't seem to find a library that works well with them.
First I tried ArrayFire, which has excellent CPU performance but poor GPU performance for machine learning, as demonstrated by the benchmarks in the machine_learning sample folder.
I looked into ROCm and MIOpen, and I tried the HIP-enabled TensorFlow, but found it is not supported on the 290X generation. I found someone working on llvm-amdgpu support for TensorFlow as well, but it doesn't look ready yet.
I looked into accelerate for Haskell, and found an issue regarding the AMD GPU backend, but it also doesn't look ready.
Maybe I haven't been searching broadly enough? But from what I can tell, almost everything runs on CUDA, and I can't afford a new GPU for this right now.
At the time you asked the question, AMD did not support Hawaii GPUs with their ROCm driver and compute stack.
Since then, support has been added for these older GPUs.
AMD has made a TensorFlow port which installs and functions the same way as CUDA TensorFlow (AMD's port). However, it doesn't support anything older than gfx803 (Fiji, such as the R9 Fury).
I have an R9 290 and it works with the latest ROCm drivers from AMD's repo, but not with the AMD TensorFlow port. This is the error I get:
2018-08-16 12:10:58.529311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Ignoring visible gpu device (device: 0, name: Hawaii PRO [Radeon R9 290], pci bus id: 0000:01:00.0) with AMDGPU ISA gfx701. The minimum required AMDGPU ISA is gfx803.
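If you want to check a card's ISA before installing the port, a small HIP sketch can print it. This assumes a working ROCm/HIP install; note that the property field name has changed across HIP releases, so treat gcnArchName as an assumption for newer releases:

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // Prints each visible GPU and its ISA so you can check it against the
    // gfx803 minimum before installing the TensorFlow port.
    int main() {
        int count = 0;
        if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
            std::fprintf(stderr, "No HIP devices found (is ROCm installed?)\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            hipDeviceProp_t prop;
            hipGetDeviceProperties(&prop, i);
            // Older HIP releases expose an integer field named gcnArch (e.g. 701)
            // instead of the gcnArchName string used here.
            std::printf("device %d: %s (%s)\n", i, prop.name, prop.gcnArchName);
        }
        return 0;
    }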
I am new to working with the Xeon Phi coprocessor, and my question is:
Does a mechanism like CUDA streams exist on the Xeon Phi?
That's right: hStreams essentially covers the key features of CUDA Streams and OpenCL, in that several CUDA Streams and OpenCL apps have been ported to hStreams. Users of hStreams, like the OmpSs folks at Barcelona Supercomputing Center, assessed that hStreams was easier to use than CUDA Streams, offered better support for synchronization, required fewer unique APIs, and took fewer lines of code.
For some more documentation, please see http://lotsofcores.com/hStreams, where you can also find a link to download MPSS and a blog post that highlights a few of its features, including hStreams.
Once you've installed hStreams, look in /usr/share/doc/hStreams.
Yes. The Intel Manycore Platform Software Stack (MPSS) provides hStreams, which is designed to be similar to the CUDA streams model.
There is a chapter on hStreams in High Performance Parallelism Pearls Volume Two, which you can preview in Google Books.
I can't find any detailed documentation on Intel's website, but the release notes say that you can find PDFs in the MPSS distribution, which should be on any Intel Xeon Phi coprocessor system.
BSC has detailed documentation of hStreams here.
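For readers coming from the GPU side, this is the CUDA streams pattern the answers above are comparing hStreams against; hStreams offers analogous enqueue/transfer/synchronize operations under its own API names (see the MPSS documentation linked above). A minimal host-side C++ sketch:

    #include <cuda_runtime.h>

    // Two independent streams: operations queued in the same stream execute in
    // order, but the two streams are free to run concurrently on the device.
    int main() {
        const size_t bytes = 1 << 20;
        float *h0, *h1, *d0, *d1;
        cudaMallocHost(&h0, bytes);  // pinned host memory, required for true async copies
        cudaMallocHost(&h1, bytes);
        cudaMalloc(&d0, bytes);
        cudaMalloc(&d1, bytes);

        cudaStream_t s0, s1;
        cudaStreamCreate(&s0);
        cudaStreamCreate(&s1);

        // Enqueue a copy and a clear into each stream; the host returns immediately.
        cudaMemcpyAsync(d0, h0, bytes, cudaMemcpyHostToDevice, s0);
        cudaMemsetAsync(d0, 0, bytes, s0);
        cudaMemcpyAsync(d1, h1, bytes, cudaMemcpyHostToDevice, s1);
        cudaMemsetAsync(d1, 0, bytes, s1);

        cudaStreamSynchronize(s0);  // wait for s0's queued work only
        cudaStreamSynchronize(s1);

        cudaStreamDestroy(s0);
        cudaStreamDestroy(s1);
        cudaFree(d0); cudaFree(d1);
        cudaFreeHost(h0); cudaFreeHost(h1);
        return 0;
    }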
So I have a GPU memory leak in certain scenarios in my application. However, I am not aware of any detailed memory profiler for the GPU like those for the CPU. Is there anything out there that can achieve this? I am using D3D (since it's WPF, there are D3D9, D3D10, and D3D11 components...)
Thanks!
Are you using the debug setting in the DirectX Control Panel? This helps you dump the ID of the leaking allocation. You can then set an HKLM registry value and break on the leaking allocation, as explained here:
http://legalizeadulthood.wordpress.com/2009/06/28/direct3d-programming-tip-5-use-the-debug-runtime/
http://www.gamedev.net/topic/313718-tracking-down-a-directx-leak/
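If any of your components are D3D11, you can also get a live-object dump directly from code via the debug layer. A minimal sketch (error handling omitted, and the surrounding setup is illustrative):

    #include <d3d11.h>
    #pragma comment(lib, "d3d11.lib")

    // Create the device with the debug layer enabled, then dump every object
    // still alive; leaked resources show up with a non-zero refcount in the
    // debugger output window.
    int main() {
        ID3D11Device *device = nullptr;
        ID3D11DeviceContext *context = nullptr;
        D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                          D3D11_CREATE_DEVICE_DEBUG,  // enables the debug layer
                          nullptr, 0, D3D11_SDK_VERSION,
                          &device, nullptr, &context);

        // ... create and (supposedly) release your resources here ...

        ID3D11Debug *debug = nullptr;
        if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Debug),
                                             reinterpret_cast<void **>(&debug)))) {
            debug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
            debug->Release();
        }
        context->Release();
        device->Release();
        return 0;
    }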
You can also try Nsight, which you can download for free from NVIDIA. For Maximus cards there is also a specific GPU Debugger; otherwise you can use the Graphics Debugger and try to isolate the memory bump there. In the Performance Debugger you can detect both OpenGL and DirectX events, though this is more performance oriented.
Depending on your GPU's vendor (as you have not provided us with that information), here are the possible solutions:
Intel: Use the Intel Media SDK's GPU Utilization Utility. This comes packaged in Intel INDE (Integrated Developer Environment).
AMD: CodeXL provides an on-the-fly debugger and an extensive memory profiling tool, and is now provided as part of their GPUOpen initiative.
NVIDIA: Use the NVIDIA Visual Profiler (NVVP) combined with traces from NVIDIA Nsight; these utilities are provided with the standard NVIDIA CUDA installer.
Notes:
With NVIDIA, you must also install the provided GPU driver (from the CUDA SDK) to enable any form of GPU-based driver profiling and debugging. Take note of this limitation if you use your development rig for other purposes such as gaming, as the bundled driver is often much, much older than the stock Game Ready drivers.
Where can I find an OpenCL SDK for the Intel Core 2 Duo?
Graphics card: Mobile Intel(R) Series Express Chipset Family.
The current Intel OpenCL SDK does not support Core 2 Duo series CPUs (see the release notes).
If, however, you want to use that kind of CPU for OpenCL (development), you can use the AMD APP SDK. It supports all CPUs with at least SSE 2.x, as can be seen here.
Works for me (Core2Duo 6750, Ubuntu)
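To confirm the APP SDK actually exposes your CPU as an OpenCL device, here is a minimal query sketch (illustrative; assumes the SDK headers and ICD are installed):

    #include <CL/cl.h>
    #include <cstdio>

    // Enumerates every OpenCL platform and prints any CPU devices it exposes;
    // with the AMD APP SDK installed, a Core 2 Duo should appear here.
    int main() {
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, nullptr, &numPlatforms);
        if (numPlatforms > 16) numPlatforms = 16;
        cl_platform_id platforms[16];
        clGetPlatformIDs(numPlatforms, platforms, nullptr);

        for (cl_uint p = 0; p < numPlatforms; ++p) {
            cl_device_id devices[16];
            cl_uint numDevices = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_CPU, 16,
                               devices, &numDevices) != CL_SUCCESS)
                continue;  // this platform exposes no CPU device
            for (cl_uint d = 0; d < numDevices; ++d) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
                std::printf("platform %u: CPU device: %s\n", p, name);
            }
        }
        return 0;
    }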
Some time ago, I read some rumors about a hardware implementation of OpenVG in Intel Atoms of a "new generation". Now I cannot find any evidence. So, are there at least some plans to support OpenVG at all?
The answer: Yes.
The announced Intel Atom Z6xx is an SoC (System-on-a-Chip) that includes the GMA 600 graphics core. The GMA 600 can accelerate OpenVG as well as OpenGL. I'm not sure this acceleration makes much sense, but it is supported.