I'm developing a C++ application on the ESP32-DevKitC board where I sense acceleration from an accelerometer. The application's goal is to store the accelerometer data until storage is full, then send all the data over WiFi and start all over again. The micro also goes into deep-sleep mode whenever possible.
I'm currently using the ESP32 NVS library, which is very well documented and pretty easy to use. The downside is that the library uses flash memory, so a lot of writes will end up degrading it.
I know that Espressif also offers some other storage libraries (FAT, SPIFFS, etc.) but, as far as I know (correct me if I'm wrong), they all use the flash as well.
Is there any other possibility of doing what I want to but without using the Flash storage?
Clarifications
Using Flash memory is not the problem itself, but degrading it.
Storage has to be non-volatile, or at least not erased when the micro goes into deep-sleep mode.
I'm not using any Arduino library.
That's a great question that I wish more people would ask.
ESP32s use NOR flash storage, which is usually rated for between 10,000 and 100,000 write cycles (100,000 seems to be the standard these days). Flash can't write single bytes; instead it writes a "page" of bytes, which I believe is 256 bytes. So each 256-byte page is rated for at least 100,000 cycles. When a device is rated for 100,000 cycles it's likely to be usable for at least 10 times that, but the manufacturer is not going to make any promises beyond the 100,000.
SPIFFS (and LittleFS, now used on the ESP8266 Arduino Core) performs "wear leveling" to minimize the number of times a particular page is written. So if you modify the same section of a file repeatedly, it will automatically be written to different pages of flash. FAT is not designed to work well with flash storage; I would avoid it.
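For reference, here's a minimal sketch of how SPIFFS is mounted and written to under ESP-IDF; the partition label and file name are placeholders for whatever your project uses:

    #include <stdio.h>
    #include "esp_err.h"
    #include "esp_spiffs.h"

    void storage_init(void)
    {
        // Mount SPIFFS once at startup ("storage" is a placeholder partition label).
        esp_vfs_spiffs_conf_t conf = {};
        conf.base_path = "/spiffs";
        conf.partition_label = "storage";
        conf.max_files = 4;
        conf.format_if_mount_failed = true;
        ESP_ERROR_CHECK(esp_vfs_spiffs_register(&conf));
    }

    void storage_append(const float sample[3])
    {
        // Appending repeatedly is fine: SPIFFS decides which physical page is written.
        FILE *f = fopen("/spiffs/accel.bin", "ab");
        if (f != NULL) {
            fwrite(sample, sizeof(float), 3, f);
            fclose(f);
        }
    }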
Whether SPIFFS with wear leveling will be adequate for your needs depends on the required lifetime of the device versus how much data you'll be writing and how frequently.
NVS may perform some level of wear leveling, to an extent I'm unsure about. In a forum post with two Espressif employees, they both confirm that NVS does do some form of wear leveling. NVS is best used to persist things like configuration information that doesn't change frequently. It's not a great choice for storing information that's updated often.
You mentioned that the data just needs to survive deep sleep. If that's the case, your best option (if it's large enough) is to use the ESP32's RTC static RAM. This chunk of memory will survive restarts and deep sleep mode, but will lose its state if power is interrupted. It's real RAM so you won't wear it out by writing to it frequently, and it doesn't cost a lot of energy to write to. The catch is there's only 8KB of it.
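As a rough illustration (the buffer size and wake-up period are arbitrary, and read_accelerometer() is a hypothetical helper), data placed in RTC slow memory with the RTC_DATA_ATTR attribute survives deep sleep:

    #include "esp_attr.h"
    #include "esp_sleep.h"

    // Lives in RTC slow memory: survives deep sleep, but is lost on power-off.
    // 512 samples x 3 floats = 6 KB, which fits in the 8 KB mentioned above.
    RTC_DATA_ATTR static float samples[512][3];
    RTC_DATA_ATTR static int sample_count = 0;

    void read_accelerometer(float out[3]);   // hypothetical helper, implemented elsewhere

    extern "C" void app_main(void)
    {
        if (sample_count < 512) {
            read_accelerometer(samples[sample_count]);
            sample_count++;
        }
        // Sleep for one second, then wake and take the next sample.
        esp_sleep_enable_timer_wakeup(1000000ULL);
        esp_deep_sleep_start();
    }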
If the 8KB of RTC RAM isn't enough and you're writing too much data too frequently to trust that SPIFFS will be okay, your best bet would be an SD card. The ESP32 can talk to an SD card adapter. SD cards use NAND flash, which has a much greater lifespan than NOR and can be safely overwritten many more times (which is why these kinds of cards are usable for filesystems in devices like Raspberry Pis).
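If you do go the SD card route, ESP-IDF exposes the card as a FAT filesystem; a rough sketch in SDMMC mode (your board's wiring may call for the SPI variant and different pins instead):

    #include "esp_err.h"
    #include "esp_vfs_fat.h"
    #include "driver/sdmmc_host.h"
    #include "sdmmc_cmd.h"

    void sdcard_init(void)
    {
        sdmmc_host_t host = SDMMC_HOST_DEFAULT();
        sdmmc_slot_config_t slot_config = SDMMC_SLOT_CONFIG_DEFAULT();

        esp_vfs_fat_sdmmc_mount_config_t mount_config = {};
        mount_config.format_if_mount_failed = false;
        mount_config.max_files = 4;

        sdmmc_card_t *card = NULL;
        // Mounts the card at /sdcard; after this, ordinary fopen()/fwrite() work on it.
        ESP_ERROR_CHECK(esp_vfs_fat_sdmmc_mount("/sdcard", &host, &slot_config,
                                                &mount_config, &card));
    }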
Writing to flash also takes much more energy than writing to regular RAM. If your device is going to be battery powered, the RTC RAM is also a better choice than SPIFFS or an SD card from a power savings perspective.
Finally, if you use the RTC RAM I'd recommend starting to transmit it over WiFi before it's full, as bringing up WiFi and transmitting the data could easily take long enough that you might run out of space for some samples. Using it as a ring buffer and starting the transmit process when you hit a high water mark, rather than when the buffer is full, would probably be your best bet, as sketched below.
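A minimal sketch of that idea, with an arbitrary capacity and a made-up start_wifi_and_transmit() standing in for your actual send path:

    #include "esp_attr.h"

    #define CAPACITY   512   // samples kept in RTC RAM (arbitrary)
    #define HIGH_WATER 384   // start transmitting at 75% full

    RTC_DATA_ATTR static float ring[CAPACITY][3];
    RTC_DATA_ATTR static int head = 0;    // next write position
    RTC_DATA_ATTR static int count = 0;   // samples currently buffered

    void start_wifi_and_transmit(void);   // hypothetical, implemented elsewhere

    void push_sample(const float s[3])
    {
        for (int i = 0; i < 3; i++) {
            ring[head][i] = s[i];
        }
        head = (head + 1) % CAPACITY;
        if (count < CAPACITY) {
            count++;                      // once full, the oldest sample is overwritten
        }
        if (count >= HIGH_WATER) {
            // Bring up WiFi and start draining while new samples keep arriving.
            start_wifi_and_transmit();
        }
    }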
I know I'm late with this answer, but you can buy ESP32 modules with external RAM, even 4-8 MB of it. External RAM is really fast (at least much faster than the flash; it uses an SPI interface to communicate) and you can fit a lot of sensor readings in there.
I'm using an ESP32-WROVER-E module with 8 MB of external RAM (4 MB is usable with normal function calls) and 16 MB of flash.
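Once PSRAM support is enabled in menuconfig, you can ask the heap allocator for external RAM explicitly; a minimal sketch (the buffer size is just an example):

    #include "esp_heap_caps.h"

    float *alloc_sample_buffer(void)
    {
        // Request a large buffer from external (SPI) PSRAM rather than internal RAM.
        // 100000 samples x 3 floats = ~1.2 MB, well within the usable 4 MB.
        float *samples = (float *) heap_caps_malloc(100000 * 3 * sizeof(float),
                                                    MALLOC_CAP_SPIRAM);
        // Returns NULL if PSRAM is not enabled in menuconfig or not present.
        return samples;
    }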
Here is a link to the module I'm using on TME's site.
Related
I'm using a PIC12F1840 chip with an MPU9250 accelerometer to collect movement data. I'm currently using a 1 Kb SPI RAM chip, but it gets full quite quickly, and there is also data loss while trying to read the data from it (due to the RAM's need for continuous power!).
It doesn't have to be fast; the MPU's highest sample rate is only 184 Hz, and I'm planning to use it on a slower setting by default. Can someone suggest some type of memory?
I'm trying to implement a RAM test like the one at this URL (http://www.esacademy.com/en/library/technical-articles-and-documents/miscellaneous/software-based-memory-testing.html) on a dual-core microcontroller.
The RAM test must be able to run in the middle of other processing.
I thought about implementing this by disabling interrupts, but that is not appropriate.
As a precondition, my RAM test is supposed to back up the data to another region before testing and then restore it to the original addresses.
That way, other drivers can keep using the same data as usual after the RAM test.
If I disable interrupts, that doesn't work on a dual-core part: both cores access the same RAM region, and disabling interrupts on one core does not stop the other core's processing, so data inconsistencies still occur.
Could you give me your ideas?
Somewhat by definition, if you are running code on that RAM you are not testing that RAM. If you want to do a memory test, you need to be off the RAM under test.
But that depends on what your definition of a test is. If it is a memory test meant to test the memory itself, you can't be running on it: you are not testing some of the memory, so you are not testing the memory (which looks like what your link is about; note that remote links are discouraged in SO questions and answers, since they are not assumed to remain active).
You can't test one half and then the other half either; that way you are not testing the address bus completely.
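To make that concrete, the classic data-bus check in the spirit of the linked article walks a single 1 bit across one fixed address; the address-bus check then repeats the idea across power-of-two address offsets. A minimal sketch (the code running this must not itself live in the RAM under test):

    #include <stdint.h>

    // Walks a single 1 bit across the data bus at one address.
    // Returns 0 on success, or the first pattern that failed.
    uint32_t data_bus_test(volatile uint32_t *addr)
    {
        for (uint32_t pattern = 1; pattern != 0; pattern <<= 1) {
            *addr = pattern;
            if (*addr != pattern) {
                return pattern;
            }
        }
        return 0;
    }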
If this is a performance test, then ideally you want to be off of it and have the test run completely from cache. Multi-core helps for a targeted test, as you can push the interface a little bit harder, though it is difficult to max it out with a general-purpose processor, multi-core or not.
Otherwise, if you just want to exercise a fraction, then allocate a fraction and test it in whatever way you wish. It's not really a memory test, though.
It sounds from your requirements like you are not really interested in a full memory test, so do as much as you can to make your boss happy.
Actually memory-testing a system is very much specific to that system, in how you approach it and how you solve it. You want that code (and stack) to not be on that RAM. Ideally the chip/system design includes a fast internal SRAM that you can use for board bring-up and design verification, and possibly manufacturing test, though a manufacturing test should be testing the solder and the board, not all the bits in the RAM (there are ways to do that too). If there is no internal SRAM then the designers had to provide some way to bring the system up, or not; if you can run from flash with the cache on, and can map that out of the way of the DRAM address space, then you can test the DRAM (or external RAM) that way (no stack, just the CPU registers, basically assembly language).
Most available (cheap) desktop x86 platforms still have no ECC memory support (Error Checking & Correction). But the rate of memory bit-flip errors is still growing (there is an SO thread on this, not the best; the large-scale CERN 2007 study "Data integrity": "bit error rate of 10^-12 for their memory modules ... observed error rate is 4 orders of magnitude lower than expected"; Google's 2009 "DRAM Errors in the Wild: A Large-Scale Field Study"). For current hardware with a data-intensive load (8 GB/s of reading) this means that a single bit flip may occur every minute (at the 10^-12 vendor BER from CERN07) or once in two days (at the 10^-16 BER from CERN07). Google09 says that there can be up to 25,000-75,000 one-bit FIT per Mbit (failures in time per billion hours of operation), which equals 1 to 5 bit errors per hour for 8 GB of RAM ("mean correctable error rates of 2000-6000 per GB per year").
So I want to know: is it possible to add some kind of software error detection in a system-wide manner (checking both user and kernel memory)? For example, create a patch for the Linux kernel and/or for the system compiler to add some checksumming of every memory page, and try to detect silent memory corruption (bit flips) by regularly recomputing the checksums?
For example, can we see all writes to memory (both from user and kernel space), to distinguish intended memory changes from in-memory bit flips? Or can we somehow instrument all code with some helper?
I understand that any kind of software memory ECC may cost a lot of performance and will not catch all errors, but I think it can be useful to detect at least some memory bit flips early, before they are reused in later computations or stored to the hard drive.
I also understand that the better way to protect data from memory bit flips is to switch to ECC hardware, but most PCs out there are still non-ECC.
The thing is, ECC is dirt cheap compared to "software ECC countermeasures". You can easily detect if they have ECC modules and complain (or print a warning) when they don't.
http://www.cyberciti.biz/faq/ecc-memory-modules/
For example, can we see all writes to memory (both from user and kernel space), to distinguish intended memory changes from in-memory bit flips? Or can we somehow instrument all code with some helper?
Er, you will never "see" the bit flips on the bus. They are literally caused by a particle hitting RAM, flipping a bit. Only much later can you notice that you read out something different than you wrote in. To detect this only via the bus, you would need a duplicate copy of all your RAM (i.e. create a shadow copy of what is in your real RAM, so you can verify every read returns what was written to that location).
try to detect silent memory corruption (bit flips) by regularly recomputing the checksums?
The Redis guy has a nice write-up on an algorithm for testing RAM for problems. http://antirez.com/news/43 But this is really looking for RAM errors, not random bit-flips.
If "recompute checksums" only works when you are NOT writing to the memory. That might be "good enough" but you'll need to figure out which pages are not being written to.
To catch 100% of the errors, every write must be preceded by computing the checksum of that block of memory and comparing it to the recorded checksum (to make sure that block hasn't degraded in RAM). Only then is it safe to do the write and then update the checksum. As you can imagine, the performance of this will be horrible (at least 100x slower).
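A sketch of what that bookkeeping looks like for a single block; the GuardedBlock structure is made up, and a toy FNV-1a hash stands in for a real checksum or ECC code:

    #include <stdint.h>
    #include <stddef.h>

    // Toy FNV-1a hash standing in for a real checksum/ECC code.
    static uint32_t checksum(const uint8_t *p, size_t n)
    {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 16777619u; }
        return h;
    }

    struct GuardedBlock {
        uint8_t  data[4096];
        uint32_t sum;
    };

    // Returns false if the block was corrupted since the last recorded checksum.
    bool guarded_write(GuardedBlock *b, size_t offset, uint8_t value)
    {
        if (checksum(b->data, sizeof(b->data)) != b->sum) {
            return false;                                 // bit flip detected before touching the block
        }
        b->data[offset] = value;                          // the actual write
        b->sum = checksum(b->data, sizeof(b->data));      // re-record the checksum
        return true;
    }

Every single-byte write now hashes the whole block twice, which is where the 100x-or-worse slowdown comes from.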
I understand that any kind of software memory ECC may cost a lot of performance and will not catch all errors, but I think it can be useful to detect at least some memory bit flips early, before they are reused in later computations or stored to the hard drive.
Well, there is a simple method to detect 100% of the errors, at a cost of 50% performance: Just run the computation on 2 boxes at once (or on one box at two different times, maybe with a RAM test in between if you are paranoid.) If the results differ, you have detected an error.
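In code, the idea is nothing more than the following (the computation itself is a placeholder):

    #include <cstdio>

    // Placeholder for the real workload.
    long long compute(void)
    {
        long long acc = 0;
        for (int i = 0; i < 1000000; i++) acc += (long long)i * i;
        return acc;
    }

    int main()
    {
        long long first  = compute();   // first run (or the run on box A)
        long long second = compute();   // second run (or the run on box B)
        if (first != second) {
            std::printf("mismatch: a memory (or CPU) error occurred\n");
            return 1;
        }
        return 0;
    }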
See also:
https://www.linuxquestions.org/questions/linux-hardware-18/how-to-detect-ecc-memory-errors-under-linux-886011/
The answer to the question is yes, and a proof of that is the SoftECC software posted in the comments!
Just a note that SoftECC is a kernel-level solution. If a user-land app were used, it would be a third level of redundancy, which seems unnecessary.
Is it possible to bounce data back and forth between, let's say, a computer in the USA and a computer in Australia through the internet, just sending these packets back and forth, and use this bounced data as data storage?
As I understand it, it would take some time for the data to go from A to B, let's say 100 milliseconds, and therefore the data in transit could be considered data in storage. If both nodes had good, free bandwidth, could data be stored in this transmission space by bouncing the data back and forth in a loop?
Would there be any reasons why this would not work?
The idea comes from a different idea I had some time ago, where I thought you could store data in empty space by shooting laser pulses between two satellites a few light-minutes apart. In the light-minutes of space between them you could store data as the transmission itself.
Would there be any reasons why this would not work?
Lost packets. Although some protocols (like TCP) have means to recover from packet loss, this involves the sender re-sending lost packets as needed. That means each node must still keep a copy of the data available to send it again (or the protocol would fail), so you'd still be using local storage until the communication completes.
If you have taken any networking classes, you will know the end-to-end principle, which states:
The end-to-end principle states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes
Hence, you cannot expect routers between your two hosts to keep the data for you. They are free to discard it at any time (or they themselves may crash at any time with your data in their buffers).
For more, you can read this wiki link:
End-to-End principle
I think this should actually work, as in reality you would be storing that information in the various I/O buffers of numerous routers, switches and network cards. However, the amount of storable information would probably be too small to have practical use, and network administrators of all levels are unlikely to enjoy and support such a creative approach.
Storing information in a delay line is a known approach and has been used to build memory devices in the past. However, those past methods relied on the delay during signal propagation over a physical medium. As the internet mostly uses wires and electromagnetic waves that travel at the speed of light, not much information can be stored this way. Past memory devices mostly used sound waves.
I have had my CF cards for a couple of years now and have always taken it for granted that they will store my pictures reliably. But should I? Will there be a time when they suddenly fail? And if so, what are the parameters: age, number of reads/writes?
Assuming your CF cards are solid-state flash memory and NOT magnetic (which is probably the case since most CF cards are solid-state), your cards should probably outlive you or me, even if you throw them against the wall every day. But don't take my word for it, follow the link and read the article.
From http://en.wikipedia.org/wiki/CompactFlash :
CompactFlash cards that use flash memory, like other flash-memory devices, are rated for a limited number of erase/write cycles for any "block." (Read cycles do not cause wear to the device.) Cards using NOR flash had a write endurance of 10,000 cycles. Current cards using NAND flash are rated for 1,000,000 writes per block before hard failure. This is less reliable than magnetic media . . .

Most CompactFlash flash-memory devices limit wear on blocks by varying the physical location to which a block is written. This process is called wear leveling. When using CompactFlash in ATA mode to take the place of the hard disk drive, wear leveling becomes critical because low-numbered blocks contain tables whose contents change frequently. Current CompactFlash cards spread the wear-leveling across the entire drive. The more advanced CompactFlash cards will move data that rarely changes to ensure all blocks wear evenly.

NAND flash memory is prone to frequent soft read errors. The CompactFlash card includes error checking and correcting (ECC) that detects the error and re-reads the block. The process is transparent to the user, although it may slow data access.

As flash memory devices are solid-state, they are more shock-proof than rotating disks. For example, the ST68022CF Microdrive is shock rated at 175G operating and 750G non-operating.
Hope this helps!