NAND ECC sector size - memory

Recently I have been studying the basics of Hamming, R-S and BCH ECC schemes for NAND flash.
According to this source (at the bottom of the page), for BCH, you could have a sector size of 512B or 1024B. The number of parity bits depends on the sector size.
My question -- When dealing with ECC, is the sector size (512B or 1024B) "user selectable"? That is, selectable by the software programmer? Or is this something set in stone by the NAND vendor that you must comply with?

Yes, the ECC sector size is user selectable. Here 'the user' is whoever has full control over 'the programmer' you mentioned.
Selection of the ECC sector size is not restricted from the NAND flash perspective. It depends instead on 'the programmer' that communicates with the NAND to write/read data. The 'programmer' can be a microcontroller in an embedded device (a cellphone, set-top box, etc.), a PC-based NAND bulk programmer, and so on. Depending on its design or design restrictions, 'the programmer' may select either ECC sector size.
The NAND flash itself does not restrict aspects such as the format, size, or layout of the ECC data relative to the page data.

I found a very good resource, How to use NAND flash, related to STMicroelectronics products, that may give you some hints.
For example, for the above-mentioned NAND flash:
The BCH controller uses ECC sectors of 1024 bytes, which means small-page NAND (512B) is not supported.
Also have a look at these tools created for interacting with NAND devices.

Related

NAND Flash interfacing ARM9 MTD partition not getting generated

I am developing an application on an ARM9-based board using Ubuntu 10.04 and GCC as the compiler.
Previously I interfaced a NAND flash from STMicroelectronics (NAND512W3A25NB). It is 64 MB with a page size of 512 bytes.
With this NAND my application works fine.
Due to an increased memory requirement I need to switch to a bigger NAND flash, from Micron (MT29F2G08ABAEA). It is 256 MB with a page size of 2048 bytes.
With this change my board does not boot up.
I get the manufacturer ID as well as the chip ID, but the MTD partitions are not being generated.
After some searching I found there is a problem regarding the PAGE_SIZE.
I do not know how to solve this. Going through linux/include/mtd/nand.h, it has a MAX_ALLWABLE_PAGE_SIZE of 8216, which is within my requirement, so I cannot pin down exactly where I am going wrong.
I use the same chip, Micron MT29F2G08ABAEA, on an IMX25 design. The chain mtd->ubi->ubifs is quite happy with this chip. The difference is our NAND flash controllers and their configuration.
The Micron chip has sub-pages and your controller may not support that. Searching through davinci_nand.c, I don't see any sub-page handling.
For the MXC NAND controller, we are using hw_ecc, flash_bbt, and a width of one. The Micron chip is only 8-bit, although there are some 16-bit versions like the Micron MT29F2G16ABAEA. Make sure the geometry is correct. I think Linux MTD supports several chips in parallel.
It is quick to verify with the data sheets whether that part is faster or not. I suspect the ST part is slower than the Micron part, so timing is not your issue.
Timing analysis of the Micron MT29F2G08ABAEA indicated that the IMX25 NAND flash controller was actually the bottleneck; the Micron flash seems quite fast. Your problem is either a bug in the NAND controller driver or, more likely, a configuration issue.
Some other information that is helpful (for you or someone to help you),
Some dmesg or console output.
A link to data sheets.
The exact NAND controller used.
The platform data or DT info used.
grep '^[^#].*MTD' .config or MTD related configuration.
I don't think anyone can answer your question out-right, but I am glad to be surprised.

Improve compression ratio with Delphi 6 app that uses the Windows AVIFile functions?

I have a Delphi 6 app that makes movies from an incoming video and audio stream from a robot. The PC receives the video stream as a series of JPEG frames and the audio as blocks of PCM audio data. I am using the Windows AVIFile functions (AVIStreamCreate, etc.) to create the movie. For the choice of video compressor I use the AVISaveOptions() function and let the user select one of the compressors available on their system, for example Microsoft Video 1, Cinepak Codec by Radius, etc. Note that several of the others available, like Microsoft H.263 or H.261, fail with AVIERR_BADFORMAT errors, so I could not test with them. The audio is compressed using the GSM 6.10 compressor.
The problem is I can't seem to get near the compression ratio that I can using a tool like Adobe Premiere for comparison. Note, I am aware that Premiere is compressing using a different overall process than mine, and to a different file format like MPEG, or Quicktime, etc. But I would like to get a comparable compression ratio if I can.
No matter which compressor I choose from AVISaveOptions(), and no matter how low I crank the available compression quality settings for the compressor (for example, Temporal Quality Ratio and Compression Quality for Microsoft Video 1), a minute's worth of video always ends up creating an AVI file of approximately 14 MB. For comparison, the file I can create using Adobe Premiere is less than 1 MB and looks about the same visual quality (in other words, good enough for my purposes; I don't care about actual quality loss here).
If I examine the file output from my usage of the Windows AVI API I see that none of the settings I change with the compressor affect the frame rate. It is always identical to the input frame rate. Now if necessary obviously I can drop frames on the input side, but that would be a bit messy since it is synced to the audio and I'd like to avoid that if I could.
But more important is the data rate. I can never get it below approximately 2.3 kbps no matter how far I crank the compressor settings down. The videos I create with Premiere, and other videos I've played with that have a healthy file-size-to-duration ratio, are all about 1.2 kbps.
Overall the difference between the file size of my AVI files and the ones I create with Premiere, or that other people have sent me that compress well, is 10 to 1. My compression ratio is therefore 10 times worse than that of other video files, and those other files show no unpleasant difference in video quality.
What can I do to get a comparable compression ratio?
UPDATE: The reply by David Heffernan contains a fast solution that worked for me. I am highlighting it because it also contains a vital licensing warning. For those of you who, like me, want to make it as convenient as possible for your users to use the XVid codec, read the article below. It contains instructions on how to re-use a user's compressor choice, along with their chosen compression settings, in future sessions without bothering the user again:
http://msdn.microsoft.com/en-us/magazine/hh580739.aspx
For the curious, the change in size from my previous output AVI file size to the file created using the XVid codec was 12.231 MB to 632 KB and the video quality was more than reasonable.
The truly simple answer is to install the XVID encoder. None of the codecs that are supplied with Windows are fit for your purpose. XVID is both high quality and free.
Regarding distribution and licensing implications, the XVID FAQ has this to say:
Can I distribute Xvid together with my proprietary program?
If your program calls Xvid functionality upon run-time it’s a derived work and hence, the terms of the GPL apply to the work as a whole including your program. So no, you cannot distribute Xvid together with your proprietary program then. If you want to distribute, you’ll have to publish your program under the GPL as well. That also requires e.g. the provision of the full apps source code. Refer to the GPL license text for more information.
We don’t link to Xvid at all, just call through the VfW interface upon run-time – can we distribute with our proprietary software?
No. It doesn’t matter in which way you link to Xvid or what you count as linking and what not. The GPL doesn’t focus on the term ‘linking’ at all but rather requires combined/derived works to be published as a whole under the terms of the GPL. Basically any two (or more) pieces make up a combined work when they are distributed for use in combination. Hence, if your program calls upon Xvid functionality at run-time it would make up a derived work - no matter how you technically implement the calls to Xvid. If you don’t want to publish your program under the GPL then refrain from distributing it in combination with Xvid.
What this means for you is that you could only distribute XVID with your program if your program is also licensed under the GPL. But it is perfectly fine for you to suggest to your users that they obtain XVID for themselves.

Still a future (and a present) for 6502, VIC and SID?

As a derivative of my previous curiosity question, I have a follow-up. Is there a future and/or an application for the 6502, the VIC, and the SID chips? I know they are still produced and used. For example, I remember the 6502 makes a perfect controller chip for small appliances. The SID is surely still present in some "retro" sound synthesizers, although my guess is that it is just emulated. What about the VIC?
Community wiki question as there's no correct answer.
I would look at 6502.org, including its list of commercial support and list of projects.
For example, I remember the 6502 makes a perfect controller chip for small appliances.
I dunno about the VIC and SID chips (special-purpose video/audio chips are a different matter from a CPU), but I don't see any reason to use a 6502. There are tons of cheap low-power microcontrollers (e.g. Microchip PIC, Atmel AVR, TI MSP430) that are readily available, have more CPU horsepower than a 6502, have useful peripherals (ADCs, UARTs, built-in oscillators, etc.), and have real-time debugging features. Why use a 30-year-old microcontroller?
I would think their future is limited. I don't know what quantities are still being produced, but you have to figure even the 486 is probably produced in far greater quantities than the 6502. So even though the 486 might be overkill for some applications, its availability determines its price, making it more attractive to device manufacturers.
Then, as you say, the functionality of the 6502, VIC, and SID chips is easily emulated these days, even in software. That might drive demand for those chips down, since it is probably cheaper to emulate them.
Cost means it still sells millions of units each year. The 6502 is the cheapest 8-bit CPU: it doesn't have a 6-month lead time like the STM8 or a braindead memory model like the PIC or 8051, and it isn't overpriced like the AVR, PIC, or MSP430. To go cheaper you have to go 4-bit, which is very limited. Admittedly, ARM chips like the STM32F030 are only a few cents more, but there is a company called Walmart that asks for products to be as cheap as possible, so manufacturers cut cents off costs.

Computer vision application for automotive telematics application

What sort of application could be considered the real business winner for automotive telematics related to image processing/computer vision?
Here are the criteria:
1. Innovative
2. Social
3. Fun.
Have you read the articles from the DARPA grand challenge winners?
DARPA site
Google Scholar
I believe the "DARPA Grand Challenge" style of automation meets your criterion 1, as there is plenty of innovation on that front.
But I still think we are a good decade away from a fully autonomous vehicle, even though the technology is almost there. The main reason is that people are still very afraid of ceding control to the computer, even though it might be the safest choice.
The transition will be slow. More and more models will bring small chunks of automation, such as smarter cruise control systems (that's a big winner right now), autonomous parking (in the market for a while now) and anti collision systems.
Which brings us to your criteria 2 and 3.
The above-mentioned systems are not fun; they are necessary for increased safety. Nowadays, social media and fun don't really mix with driving because they distract the driver from their main task. In the future, when you're on the freeway in auto-pilot mode, you will be able to open your laptop and be free to do whatever you want, since computers will always be connected to the internet. So I don't believe the car itself needs to provide that aspect of entertainment.
What I do believe is a killer feature for cars is the enhancement of intelligent comfort systems integrated with biometrics. Cars already have things like personal keys that adjust the seat height and so on according to your preferences, but it would be much nicer if the car could automatically identify the driver by some biometric feature (iris, etc.) and adjust multiple parameters accordingly. That's the end of the key. I'm not just talking about seat and pedal adjustment, but transmission style (the husband likes a more aggressive transmission) and performance limiters (the daughter cannot exceed 90% of the posted limit; the car knows what the limit is according to where it is).
In my opinion, if you implement biometric recognition + autonomous navigation, the possibilities are endless.
Although none of the applications here use computer vision, they are probably the best ones out there yet. They have received quite a bit of media hype.

Why byte-addressable memory and not 4-byte-addressable memory?

Why do computers have byte-addressable memory, and not 4-byte-addressable memory (or 8-byte-addressable memory for 64bit)? Yeah, I see how it could be useful sometimes, it just seems inelegant and excessive. Are the advantages substantial, or is it really just because of legacy?
Processors actually do access memory in 64-bit quantities (x86 has done so since the Pentium or so), and 64-bit processors often have a 128-bit bus. Moreover, accesses to main memory come in bursts that fill an entire cache line, an even larger unit of memory.
It's only the addressing that is byte-based; this adds little overhead and is not excessive at all.
Today, you absolutely need byte-based addressing for networking protocols. Implementing TCP with word-based addressing would be difficult: what do you want read() to return if what you received were 17 bytes? Likewise, the higher layers are byte-based: HTTP would be fairly difficult to implement if a request line like "GET / HTTP/1.0" were presented in units of four bytes. You would essentially have to split the words back into bytes with shift operations and the like (which processors now do in hardware, thanks to byte-based addressing).
Largely historical reasons - it has become the standard that CPUs understand. Here is a good discussion on it:
Generally, a size has to be chosen to be convenient for both data and machine instructions. 8 bits (256 values) is enough to accommodate common characters in English and some other languages. Designers of 8-bit processors presumably found that being able to encode 256 common instructions as one byte was a "reasonable tradeoff". And at the time, 8 bits was also generally enough to encode other things such as a pixel colour or screen coordinate. Having a byte size that is a power of 2 may also have been felt to be a "neater" design. It is interesting to note that, for example, Marxer, E. (1974), Elements of Data Processing, describes a byte as being either 6-bit or 8-bit depending on whether the computer was of the "octal" or "hexadecimal" type.
Certainly, other sizes were used in the early days.
We needed to settle on some size for standardization. People chose 8 bits for the reasons mentioned by Shane above, and we have been stuck with byte-addressable memory since. By now it is practically impossible to change, due to various compatibility issues and the fact that opcodes are only a byte long. But using a simple trick, byte-addressable memory is easily used word-at-a-time to fetch/store data and addresses!
