I am developing an application on an ARM9-based board using Ubuntu 10.04 and GCC as the compiler.
Previously I interfaced a NAND flash from STMicroelectronics (NAND512W3A25NB). It is 64 MByte with a page size of 512 bytes.
With this NAND my application works fine.
Due to an increase in the memory requirement, I need to switch to a larger NAND flash from Micron (MT29F2G08ABAEA). It is 256 MByte with a page size of 2048 bytes.
With this change my board does not boot.
The manufacturer ID and chip ID are detected, but the MTD partitions are not created.
After some searching I found that the problem seems to be related to the PAGE_SIZE.
I do not know how to solve this. I went through linux/include/mtd/nand.h; it has a MAX_ALLWABLE_PAGE_SIZE of 8216, which is within my requirement, so I cannot see where I am going wrong.
I use the same chip, Micron MT29F2G08ABAEA, on an IMX25 design. The mtd -> ubi -> ubifs chain is quite happy with this chip. Our differences are the NAND flash controllers and their configuration.
The Micron chip has sub-pages and your controller may not support that. Searching through davinci_nand.c, I don't see any sub-page handling.
For the MXC NAND controller, we are using hw_ecc, flash_bbt, and a width of one (8-bit bus). The Micron chip is only 8-bit, although there are 16-bit versions such as the Micron MT29F2G16ABAEA. Make sure the geometry is correct. I think Linux MTD supports several chips in parallel.
It is quick to verify from the data sheets whether that part is faster or not. I suspect the ST part is slower than the Micron part and that timing is not your issue.
Timing analysis of the Micron MT29F2G08ABAEA indicated that the IMX25 NAND flash controller was actually the bottleneck. The Micron flash seems quite fast. It is either a bug in the NAND controller or, more likely, a configuration issue.
Some other information that would be helpful (for you, or for someone trying to help you):
Some dmesg or console output.
A link to data sheets.
The exact NAND controller used.
The platform data or DT info used.
grep '^[^#].*MTD' .config or MTD related configuration.
I don't think anyone can answer your question out-right, but I am glad to be surprised.
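If the chip does eventually probe and register an MTD device, it is worth checking what geometry the kernel actually detected before digging further into the controller driver. Below is a rough userspace sketch using the MEMGETINFO ioctl; the /dev/mtd0 path is an assumption, so adjust it to your partition layout.

```cpp
// Hypothetical diagnostic sketch: query the geometry the kernel detected
// for an MTD device. The device node /dev/mtd0 is an assumption.
#include <cstdio>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>

int main()
{
    int fd = open("/dev/mtd0", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/mtd0");
        return 1;
    }

    struct mtd_info_user info;
    if (ioctl(fd, MEMGETINFO, &info) != 0) {
        perror("MEMGETINFO");
        close(fd);
        return 1;
    }

    // For the MT29F2G08ABAEA you would expect writesize = 2048 (the page
    // size) and erasesize = 131072 (128 KiB blocks).
    std::printf("size      : %u bytes\n", info.size);
    std::printf("erasesize : %u bytes\n", info.erasesize);
    std::printf("writesize : %u bytes (page size)\n", info.writesize);
    std::printf("oobsize   : %u bytes\n", info.oobsize);

    close(fd);
    return 0;
}
```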
I am a mathematician, not a programmer. I have a notion of the basics of programming and am a fairly advanced power user on both Linux and Windows.
I know some C and some Python, but not much.
I would like to make an overlay so that when I start a game it can get info about AMD and NVIDIA GPUs, like frame time and FPS. I am quite certain the current system of benchmarks used to compare two GPUs is flawed: small instances and scenes that bump up the FPS momentarily (but are totally irrelevant in terms of user experience) result in a higher average FPS number and mislead the market, either unintentionally or intentionally. For example (I can't remember the name of the game, probably COD), there was a highly tessellated entity on a map that wasn't even visible to the player, which made AMD GPUs seemingly underperform when roaming through that area, leading to a lower average FPS count.
I have an idea of how to calculate GPU performance in theory, but I don't know how to harvest the data from the GPU. Could you refer me to API manuals or references that would help me make such an overlay possible?
I would like to study as little as possible (by that I mean I would like to learn only what I absolutely have to in order to get the job done; I don't intend to become a coder).
I thank you in advance.
This is generally what the Vulkan layer system is for: it allows you to intercept API commands and inject your own. But it is nontrivial to code yourself. Here are some pre-existing open-source options for you:
To get to timing info and draw your custom overlay you can use (and modify) a tool like OCAT. It supports Direct3D 11, Direct3D 12, and Vulkan apps.
To just get the timing (and other interesting info) as CSV, you can use a command-line tool like PresentMon. It should work with D3D apps, and I have been using it with Vulkan apps too; it seems to accept them.
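If you go the PresentMon route, a small post-processing sketch is below. It assumes the CSV has a column named MsBetweenPresents (present in the PresentMon builds I have used, but check the header row of your version) and computes the average and 99th-percentile frame time, which addresses your concern about short FPS spikes inflating the average.

```cpp
// Rough sketch: average and 99th-percentile frame time from a PresentMon CSV.
// Column names can differ between PresentMon versions; check your header row.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) {
        std::cerr << "usage: frametimes <presentmon.csv>\n";   // hypothetical tool name
        return 1;
    }
    std::ifstream in(argv[1]);
    std::string line, cell;
    if (!std::getline(in, line)) return 1;                      // header row

    // Locate the frame-time column by name.
    std::vector<std::string> cols;
    std::stringstream header(line);
    while (std::getline(header, cell, ',')) cols.push_back(cell);
    auto it = std::find(cols.begin(), cols.end(), "MsBetweenPresents");
    if (it == cols.end()) { std::cerr << "column not found\n"; return 1; }
    size_t idx = static_cast<size_t>(it - cols.begin());

    std::vector<double> ms;
    while (std::getline(in, line)) {
        std::stringstream row(line);
        size_t i = 0;
        while (std::getline(row, cell, ',')) {
            if (i++ == idx) { ms.push_back(std::stod(cell)); break; }
        }
    }
    if (ms.empty()) return 1;

    double sum = 0;
    for (double v : ms) sum += v;
    std::sort(ms.begin(), ms.end());
    double p99 = ms[static_cast<size_t>(0.99 * (ms.size() - 1))];
    std::cout << "frames: " << ms.size()
              << "  avg: " << sum / ms.size() << " ms"
              << "  99th percentile: " << p99 << " ms\n";
    return 0;
}
```

Percentile frame times (or "1% low FPS") are much harder to skew with brief, irrelevant FPS spikes than a plain average.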
I have an issue with my current MicroPython project on my ESP8266. I have a 10x10 LED matrix which I would like to control via 4 shift registers.
In general, 3 pins are required for the control: DATA, LATCH and CLOCK. After some hours of internet searching, the most promising solution was to use SPI, for which I also found some useful instructions for the pyboard (thank you for the code, by the way):
https://forum.micropython.org/viewtopic.php?t=1219
I tried to replace the pyboard-specific libraries with the general machine module for the ESP8266 to get access to the SPI class. It worked fine up to a point, but the main issue at the moment is that it is not able to provide a binary signal at the DATA pin.
To be honest, I'm a little confused about the write methods in the machine.SPI class. The documentation says the return value is None. So, in general, what is the purpose of a write method with a return value of None? (Sorry for the possibly silly question.)
Is there maybe another way to get a binary signal out of the data pin? I'm no longer sure whether SPI is the best way to handle the control. Do you have other examples or tutorials to get deeper into the topic?
Thank you for your kind response in advance,
BR charlzo
I need to know how to determine which video card is being used in DirectX, because some features in my program are not supported on AMD video cards and cause a crash.
So, I need to determine which card the computer is using (some computers may have more than one video card).
So before you throw ATI/AMD under the bus here, make sure that the problem is not actually due to your application. For Direct3D 10/11, be sure to enable the debug device and ensure you do not have any CORRUPTION or ERRORS, and look at all WARNINGS.
Next, see if there is a newer driver available for the repro case. If there is, then just tell your users to update their drivers. If not, and it seems to be a legitimate crash inside the driver then report that as a bug to ATI/AMD (or NVidia or Intel as the case may be).
Test your app on more than one video card/driver combination from each vendor. For indies this can be challenging, but it's an important part of making sure your application works on a broad set of hardware. For Direct3D 11, you need to try various Direct3D hardware feature level devices to ensure good coverage.
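For reference, a minimal sketch of creating a Direct3D 11 device with the debug layer turned on and an explicit feature-level list; the helper name is hypothetical and error handling is trimmed for brevity.

```cpp
// Sketch: D3D11 device creation with the debug layer and a feature-level list.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

HRESULT CreateDebugDevice(ID3D11Device** device, ID3D11DeviceContext** context)  // hypothetical helper
{
    UINT flags = 0;
#if defined(_DEBUG)
    flags |= D3D11_CREATE_DEVICE_DEBUG;          // turn on the debug layer
#endif

    // Ask for a range of feature levels so lower-end hardware is exercised too.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,  D3D_FEATURE_LEVEL_9_1,
    };

    D3D_FEATURE_LEVEL obtained;
    return D3D11CreateDevice(nullptr,                      // default adapter
                             D3D_DRIVER_TYPE_HARDWARE,
                             nullptr,                      // no software rasterizer
                             flags,
                             levels,
                             static_cast<UINT>(sizeof(levels) / sizeof(levels[0])),
                             D3D11_SDK_VERSION,
                             device,
                             &obtained,                    // feature level actually granted
                             context);
}
```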
Real games do have some extra workarounds tied to detecting specific hardware IDs when dealing with widespread driver bugs and unofficial vendor-specific extensions. There is an example of doing this detection here, based on the vendorid/deviceid combination in DXGI_ADAPTER_DESC or D3DADAPTER_IDENTIFIER9. Locking out all cards from a specific vendor is overkill and likely to just annoy your customers.
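A rough sketch of the DXGI side of that detection, enumerating adapters and reading the vendor/device ids; the PCI vendor ids 0x10DE (NVIDIA), 0x1002 (AMD/ATI) and 0x8086 (Intel) are well known, but use them only to work around specific, documented issues.

```cpp
// Sketch: enumerate adapters via DXGI and print their vendor/device ids.
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                  reinterpret_cast<void**>(&factory))))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::wprintf(L"Adapter %u: %s (vendor 0x%04X, device 0x%04X)\n",
                     i, desc.Description, desc.VendorId, desc.DeviceId);
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```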
I've inherited an application that uses D3D9 to display graphics full screen on monitor #2. The application works properly on a desktop machine with a GeForce 9500 GT. When I attempt to get the application running on a laptop equipped with onboard Intel HD Graphics, most of the graphics are not displayed. One of the vertex buffers is drawn, but the rest are black.
I'm not very familiar with D3D, so I'm not sure where to begin debugging this problem. I've been doing some searching but haven't been able to turn anything up.
Update:
Drawing simple vertex buffers with only 2 triangles works, but anything more complex doesn't.
My gut feeling is that it is likely the supported shader models of the given GPU.
Generally it is good practice to query the gfx card to see what it can support.
There is also a chance it could be specific D3D API functionality. You see this more when switching between, say, GeForce and ATI (AMD), but of course it is also possible with Intel being its own vendor. I would start by querying the supported shaders.
For D3D9 you use IDirect3D9::GetDeviceCaps to query the gfx device.
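A minimal caps-query sketch along those lines; it only reports the vertex and pixel shader versions of the default HAL adapter.

```cpp
// Sketch: query D3D9 device caps and print the supported shader models.
#include <d3d9.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
        // If the GPU reports a lower shader model than your shaders require,
        // draws can fail or come out black.
        std::printf("Vertex shader %u.%u, pixel shader %u.%u\n",
                    (unsigned)D3DSHADER_VERSION_MAJOR(caps.VertexShaderVersion),
                    (unsigned)D3DSHADER_VERSION_MINOR(caps.VertexShaderVersion),
                    (unsigned)D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
                    (unsigned)D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));
    }
    d3d->Release();
    return 0;
}
```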
links:
Post here: https://gamedev.stackexchange.com/questions/22705/how-can-i-check-for-shader-model-3-support
http://msdn.microsoft.com/en-us/library/bb509626%28VS.85%29.aspx
DirectX also offer functionality to create features for a given device level:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476876%28v=vs.85%29.aspx
Solution #1:
Check every error code for every D3D9 call. Use DXGetErrorString9 and DXGetErrorDescription9 to get a human-readable translation of the error code. See the DirectX documentation for more info. When you finally encounter a call that returns something other than D3D_OK, investigate the DirectX documentation for that call.
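One way to do that blanket checking is a small wrapper macro; the macro name is made up here, and DXGetErrorString9A/DXGetErrorDescription9A come from dxerr9.h/dxerr9.lib in the legacy DirectX SDK (later SDKs renamed them to DXGetErrorString/DXGetErrorDescription).

```cpp
// Sketch: wrap every D3D9 call and print a translated error on failure.
#include <d3d9.h>
#include <dxerr9.h>       // legacy DirectX SDK header
#include <cstdio>
#pragma comment(lib, "dxerr9.lib")

#define CHECK_D3D(call)                                                   \
    do {                                                                  \
        HRESULT hr_ = (call);                                             \
        if (hr_ != D3D_OK) {                                              \
            std::printf("%s failed: %s (%s)\n", #call,                    \
                        DXGetErrorString9A(hr_),                          \
                        DXGetErrorDescription9A(hr_));                    \
        }                                                                 \
    } while (0)

// Usage (device creation etc. omitted):
//   CHECK_D3D(device->SetRenderState(D3DRS_ZENABLE, TRUE));
//   CHECK_D3D(device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount));
```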
Solution #2:
Install the debug DirectX runtime (it should be included with the DirectX SDK) and examine the debug messages the process outputs while it runs (the messages are printed using OutputDebugString, so you'll only see them in a debugger/IDE). With high debug settings, you'll see every single problem in your app.
I want to port a good OpenCV code to an embedded platform. Earlier, such things were very difficult to do, but now TI has come up with nice embedded platforms which are comparatively hassle-free, as they say.
I want to know the following things:
Given that:
The OpenCV code is already running smoothly on a PC (obviously).
I need to determine these things before purchasing the device.
I can't put the code here on Stack Overflow. :P
The device will be chosen from the Texas Instruments C6000 family.
Questions:
How can I make sure that the port will work?
What steps should be taken to make sure that, after porting, the code will (at least) run?
How can I determine whether the code might require changes to make it run smoothly?
Point 3 above is optional.
I need information that will at least give me a starting point in this regard.
What I thought I should do:
List the built-in (OpenCV) functions that the code uses.
Then find available online benchmarks of those functions for the particular device, like the ones shown towards the end of this doc.
...
I need to know how to proceed further.
However, the C6-Integra™ DSP+ARM processor seems the best fit.
The best you can do is to try a device simulator (if it is available), but what you'll see there is far from perfect.
Actually, nothing can tell you how fast and how well the app will run on the embedded device before you run your specific app on that specific device.
So:
Step 1 Buy it
Step 2 Try it
Things to consider:
Embedded CPU architecture: does your app need a big cache? How big is the embedded cache?
Algorithm: do you use a lot of floating-point operations? How good is the device at floating-point ops?
Memory transfers: do you move a lot of data around? The data bus on a PC is way faster than on an embedded device.
Hardware support: do you use a lot of double-precision calculations? They are emulated on many ARM cores and will kill your app (from milliseconds on a PC it can go to seconds on an ARM).
Acceleration: do your functions use SSE? (Many OpenCV functions are SSE-optimized, even if you don't know it.) Do they have a NEON counterpart? (OpenCV does not have much support for that.) The difference can be orders of magnitude between x86 with SSE and an embedded device without NEON; a quick runtime check is sketched after this answer.
and many, many others.
So, again: no one can tell you how it will work. Just the combination between the specific app and the real device tells the truth.
Even a run on a similar device is not conclusive: the app can run smoothly on a given processor and, on another with a similar frequency or listed memory, slow down far too much.
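As mentioned above, here is a quick capability probe you can compile on both the PC and the target. It assumes a reasonably recent OpenCV; CV_CPU_NEON is only defined in newer releases, hence the #ifdef.

```cpp
// Sketch: report whether this OpenCV build/CPU combination can use SSE2 or NEON.
#include <opencv2/core/core.hpp>
#include <cstdio>

int main()
{
    std::printf("Optimized code enabled: %s\n", cv::useOptimized() ? "yes" : "no");
    std::printf("Number of CPUs        : %d\n", cv::getNumberOfCPUs());
    std::printf("SSE2 support          : %s\n",
                cv::checkHardwareSupport(CV_CPU_SSE2) ? "yes" : "no");
#ifdef CV_CPU_NEON
    std::printf("NEON support          : %s\n",
                cv::checkHardwareSupport(CV_CPU_NEON) ? "yes" : "no");
#endif
    return 0;
}
```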
This is an interesting question, but "run" is a very generic word in this context, so I feel the need to break it down into 2 other questions:
Will it compile on an embedded device?
Will it run as fast/smoothly as on a PC?
I've used OpenCV on a lot of different devices, including ARM, SH4 and MIPS, and I found out that sometimes the manufacturer of the device itself provides a compiled version of OpenCV (to my surprise), which is great. That's something you can look into: maybe the manufacturer of your device provides OpenCV binaries.
There's no way to know for sure how smoothly your OpenCV application will run on the target device unless you can find some benchmark of OpenCV running on it. PCs have far better processing power than embedded devices, so you can expect less performance from the target device.
There are 3rd-party applications, like opencv-performance, that you can use to test/benchmark the environment once you get your hands on it. And if performance is such a big deal in this project, you might also be interested in this nice article, which explains some timing tests done on a couple of OpenCV features, comparing implementations using the C and C++ interfaces of OpenCV.
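Once you have the device, even a crude timing harness like the sketch below, run on both the PC and the target, already tells you a lot. The 640x480 input and GaussianBlur are placeholders; substitute the OpenCV functions your application actually uses.

```cpp
// Sketch: wall-clock an OpenCV call with cv::getTickCount() / getTickFrequency().
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>

int main()
{
    // Synthetic 640x480 colour frame as a stand-in for real input.
    cv::Mat src(480, 640, CV_8UC3);
    cv::randu(src, cv::Scalar::all(0), cv::Scalar::all(255));
    cv::Mat dst;

    const int iterations = 100;
    double t0 = static_cast<double>(cv::getTickCount());
    for (int i = 0; i < iterations; ++i)
        cv::GaussianBlur(src, dst, cv::Size(5, 5), 1.5);
    double t1 = static_cast<double>(cv::getTickCount());

    double msPerCall = (t1 - t0) / cv::getTickFrequency() * 1000.0 / iterations;
    std::printf("GaussianBlur on 640x480: %.3f ms per call\n", msPerCall);
    return 0;
}
```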