Like the question says, I am trying to build a system with DDR4 DRAM with ECC enabled. The ECC would be SEC-DED (Single Error Correction, Double Error Detection), and I would like to use scrubbing with it.
Is this possible? If so, how? What are the parameters and flags? Please let me know. I am struggling with this.
So I have a GPU memory leak in certain scenarios in my application. However, I am not aware of any detailed memory profiler for the GPU like those for the CPU. Is there anything out there that can achieve this? I am using D3D (since it's WPF, there are d3d9, d3d10, d3d11 components...)
Thanks!
Are you using the debug setting in the DirectX Control Panel? This helps you dump the ID of the leaking allocation. You can then proceed to set an HKLM registry value and break on the leaking allocation, as is explained here:
http://legalizeadulthood.wordpress.com/2009/06/28/direct3d-programming-tip-5-use-the-debug-runtime/
http://www.gamedev.net/topic/313718-tracking-down-a-directx-leak/
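For the d3d10/d3d11 components, the debug layer can also be enabled from code so that live (potentially leaked) objects are reported in the debug output. A minimal sketch for the d3d11 side; the names and structure here are illustrative, only the D3D11_CREATE_DEVICE_DEBUG flag and the ReportLiveDeviceObjects call are the point:

    // Sketch: create the D3D11 device with the debug layer turned on, then ask
    // the debug interface to report every object that is still alive.
    #include <d3d11.h>
    #include <d3d11sdklayers.h>   // ID3D11Debug
    #include <wrl/client.h>       // Microsoft::WRL::ComPtr
    #pragma comment(lib, "d3d11.lib")

    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D11Device>        g_device;
    ComPtr<ID3D11DeviceContext> g_context;

    HRESULT CreateDebugDevice()
    {
        UINT flags = D3D11_CREATE_DEVICE_DEBUG;            // enable the debug layer
        return D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                 flags, nullptr, 0, D3D11_SDK_VERSION,
                                 &g_device, nullptr, &g_context);
    }

    void DumpLiveObjects()
    {
        // Call this at a point where everything should already have been
        // released; anything still listed in the debug output is a leak candidate.
        ComPtr<ID3D11Debug> debug;
        if (SUCCEEDED(g_device.As(&debug)))
            debug->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
    }

For the d3d9 side, the break-on-allocation-ID workflow is driven by the control panel and registry settings described in the links above, not by code.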
You can also try Nsight, which you can download for free from NVIDIA. For Maximus cards there is also a specific GPU Debugger; otherwise you can use the Graphics Debugger and try to isolate the memory bump there. In the Performance Debugger you can detect both OpenGL and DirectX events, though this is more performance-oriented.
Depending on your GPU's vendor (you have not provided us with that information), here are some possible solutions:
Intel: Use the Intel Media SDK's GPU Utilization Utility. This comes packaged in Intel INDE (Integrated Native Developer Experience).
AMD: CodeXL provides an on-the-fly debugger and an extensive memory profiling tool, and is now provided as part of their GPUOpen initiative.
NVIDIA: Use the NVIDIA Visual Profiler (NVVP) combined with traces from NVIDIA Nsight; both utilities are provided with the standard NVIDIA CUDA installer.
Notes:
With NVIDIA, you must also install the GPU driver bundled with the CUDA SDK to enable any form of GPU-based driver profiling and debugging. Take note of this limitation if you use your development rig for other purposes such as gaming, as the bundled driver is often much, much older than the stock game-ready drivers.
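As an aside, if the CUDA toolkit ends up installed anyway, one crude way to watch for a leak outside the profilers is to poll the device's free memory around the scenario you suspect. This is only a sketch under that assumption; it gives device-wide totals, not per-allocation detail:

    // Sketch: poll free/total device memory via the CUDA runtime and see
    // whether free memory keeps shrinking across repetitions of the suspect
    // code path. Requires the CUDA toolkit; link against cudart.
    #include <cuda_runtime_api.h>
    #include <cstdio>

    void LogDeviceMemory(const char* tag)
    {
        size_t freeBytes = 0, totalBytes = 0;
        if (cudaMemGetInfo(&freeBytes, &totalBytes) == cudaSuccess)
            std::printf("[%s] GPU memory: %zu MiB free of %zu MiB\n",
                        tag, freeBytes >> 20, totalBytes >> 20);
    }

    // Usage: call LogDeviceMemory("before") / LogDeviceMemory("after") around
    // the leaking scenario and compare the numbers over many iterations.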
Thanks and regards,
Brainiarc7.
I am developing an application on an ARM9-based board using Ubuntu 10.04 and GCC as the compiler.
Previously I interfaced a NAND flash from STMicroelectronics (NAND512W3A25NB). It is 64 MByte with a page size of 512 bytes.
With this NAND my application works fine.
Due to an increase in memory requirements I need to switch to a bigger NAND flash from Micron (MT29F2G08ABAEA). It is 256 MByte with a page size of 2048 bytes.
With this change my board does not boot up.
I can read the manufacturer ID as well as the chip ID, but the MTD partitions are not being created.
After some searching I found that there is a problem related to the page size.
I do not know how to solve this. I went through linux/include/mtd/nand.h; it has MAX_ALLWABLE_PAGE_SIZE of 8216, which is within my requirement, so I cannot pinpoint exactly where I am going wrong.
I use the same chip, Micron MT29F2G08ABAEA, on an IMX25 design. The mtd->ubi->ubifs chain is quite happy with this chip. Our differences are the NAND flash controllers and their configuration.
The Micron chip has sub-pages and your controller may not support that. Searching through davinci_nand.c, I don't see any sub-page handling.
For the MXC NAND controller, we are using hw_ecc, flash_bbt, and a width of one. The Micron chip is only 8-bit, although there are 16-bit versions such as the Micron MT29F2G16ABAEA. Make sure the geometry is correct; I think the Linux MTD layer supports several chips in parallel.
It is quick to verify from the data sheets whether one part is faster than the other. I suspect the ST part is slower than the Micron part, so timing is not your issue.
Timing analysis of the Micron MT29F2G08ABAEA indicated that the IMX25 NAND flash controller was actually the bottleneck. The Micron flash seems quite fast. It is either a bug in the NAND controller or, more likely, a configuration issue.
Some other information that would be helpful (for you, or for anyone trying to help you):
Some dmesg or console output.
A link to data sheets.
The exact NAND controller used.
The platform data or DT info used.
The output of grep '^[^#].*MTD' .config, or the MTD-related configuration.
I don't think anyone can answer your question outright, but I am glad to be surprised.
I was wondering how this was done?
There's a very simple solution. ROM was invented first.
From Wikipedia: "The simplest type of solid state ROM is as old as semiconductor technology itself."
Computers as early as the ENIAC used ROM to store functionality. The concept of BIOS - more simply, a bootloader - wasn't necessary until computers became publicly available, by which point ROM had been around for decades.
EPROMs existed before magnetic media as far as I know, and EPROMs were what the BIOS was stored in. And they still are, in a more sophisticated form.
In the earliest computers, there was a front panel with toggle switches used to enter machine code, which got the machine up and running so it could talk to the magnetic tape or punch cards.
http://en.wikipedia.org/wiki/Front_panel
My application seems to be slow, but in terms of CPU and RAM it seems to be OK. So I want to know how much graphics card memory I am using. I've seen some questions about this on SO, but they talk about Linux or NVIDIA. I would like this information for ATI cards on Windows.
Thanks.
How about the OpenGL debugger?
If you use OpenSceneGraph to render your scene, there is a stats monitor that shows GPU usage.
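For reference, hooking that stats display up is essentially one line; a minimal sketch assuming a plain osgViewer::Viewer setup (the model file name is just a placeholder):

    // Sketch: attach OpenSceneGraph's on-screen stats handler; pressing the
    // stats key (default 's') cycles through frame-rate and scene statistics.
    #include <osgViewer/Viewer>
    #include <osgViewer/ViewerEventHandlers>
    #include <osgDB/ReadFile>

    int main()
    {
        osgViewer::Viewer viewer;
        viewer.setSceneData(osgDB::readNodeFile("scene.osgt"));  // placeholder model
        viewer.addEventHandler(new osgViewer::StatsHandler);     // the stats monitor
        return viewer.run();
    }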
I am looking for some advice on memory usage on mobile devices, BlackBerry in particular. Using some profiling tools we have calculated a working set size in RAM of 525 KB. The problem is we don't really know whether this is acceptable or too high.
Can anyone give any insight into their own experience with memory usage on BlackBerry? What sort of number should we be aiming for?
I am also wondering what sort of things we should be looking out for in particular to reduce memory usage.
512 KB is perfectly acceptable on the current generation of BlackBerry devices. You can take a look at JBenchmark to see the exact JVM heap you can expect for each model, but none of the current devices out there go below 20 MB of heap. Most are much larger than that.
On JBenchmark you can choose the device you are interested in from a drop-down on the right side of the page. Then navigate to the JVM tab for that device.
When it comes to reducing memory usage, I wouldn't worry about the total bytes used by the application if you are truly in line with 525 KB; worry instead about how often allocation/reallocation is required. Try to pool/reuse objects as much as possible and avoid any unneeded allocation. For instance, use the StringBuffer class to concatenate strings instead of the + operator: each concatenation with the operator creates additional String objects, whereas a StringBuffer just puts the characters in an array and only expands when needed. Google is a good way to find more tips.
Finally, relying on the profiling tools that the BlackBerry JDE provides is a very important part of understanding exactly how you can optimize heap memory usage.
If I'm not mistaken, BlackBerry apps are written in Java, which is a managed environment, which means the only surefire way to use less memory is to create fewer objects. There's not a whole lot you can do about your working set, I think, since it's managed by the runtime (which is probably the point of using Java on devices like this).