ESP32: Store and Send data via BLE frequently - memory

I'm developing a sensor based on the ESP32-DevKit board that senses vibration from an accelerometer. The goal of the application/sensor is to store the accelerometer data for 20 seconds and then send all of the data over BLE.
I'm currently using the ESP32 ADC (12-bit) at a fast sampling rate (10-100 kHz) to get an accurate signal. The next step is to store this signal, but it will be almost 2 MB in size, so I don't know whether I can store it on the ESP32 and send it later via BLE (packet by packet); juggling that many tasks could end up degrading processing time and energy consumption.
The main points are:
Fast sampling rate / accurate signal.
Sending data to the phone with the lowest energy possible.
Using the ESP32-S2 to store 2 MB of data and then send it to the phone app.
Is there any way to do what I want?
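For scale, a quick back-of-the-envelope estimate (using my own assumptions, not figures stated in the question: a mid-range 50 kHz sampling rate and each 12-bit reading padded to a 2-byte word) lands right around that 2 MB figure:

```swift
// Rough storage estimate for a 20-second capture (illustrative assumptions).
let sampleRateHz = 50_000      // mid-range of the 10-100 kHz figure (assumption)
let captureSeconds = 20
let bytesPerSample = 2         // a 12-bit ADC reading padded to a 16-bit word

let totalBytes = sampleRateHz * captureSeconds * bytesPerSample
print(totalBytes)              // 2000000 bytes, i.e. roughly 2 MB
```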

When storing the signal, have you considered compressing the data? If each accelerometer reading is very similar to the previous one, then storing just the difference might save a lot of space, especially if you use a variable-length format.
I have a project where I save GPS data from comparatively slow-moving boats; the difference between two coordinates (taken every second or so) is very small, so there is no point in storing the full coordinates each time.
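As a rough illustration of that idea (a sketch of my own, not code from either project), here is a minimal delta encoder: each sample is stored as the difference from the previous one, zig-zag mapped so small negative and positive deltas both become small unsigned values, and then written as a variable-length (LEB128-style) integer, so slowly changing readings cost one byte each instead of four:

```swift
// Minimal sketch of delta + variable-length encoding (illustrative only).
func encodeDeltas(_ samples: [Int32]) -> [UInt8] {
    var out: [UInt8] = []
    var previous: Int32 = 0
    for sample in samples {
        let delta = sample &- previous
        previous = sample
        // Zig-zag map: small deltas of either sign become small unsigned values.
        var value = UInt32(bitPattern: (delta << 1) ^ (delta >> 31))
        // LEB128-style varint: 7 data bits per byte, high bit means "more follows".
        repeat {
            var byte = UInt8(value & 0x7F)
            value >>= 7
            if value != 0 { byte |= 0x80 }
            out.append(byte)
        } while value != 0
    }
    return out
}

// Slowly changing readings compress well:
let samples: [Int32] = [2048, 2050, 2049, 2052, 2051]
print(encodeDeltas(samples).count)   // 6 bytes instead of 4 bytes per sample
```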

Related

Large-capacity SPI storage for PIC12F1840?

I'm using a PIC12F1840 chip with an MPU9250 accelerometer to collect movement data. I'm currently using a 1 Kb SPI RAM chip, but it fills up quite quickly, and there is also data loss while trying to read the data back from it (because the RAM needs continuous power!).
It doesn't have to be fast; the MPU's highest output rate is only 184 Hz, and I'm planning to use it at a slower setting by default. Can someone suggest a suitable type of memory?

Bluetooth Low Energy data transmission on iOS

I've recently been working on a project that uses Bluetooth Low Energy. I've implemented most of the communication protocol, but I've started to have concerns: I don't actually know how the data transmission works, or whether the solution I implemented will behave the same way with all devices.
My main concern is: what chunk of data have I received when peripheral(_:didUpdateValueFor:error:) fires? Is it only as big as the negotiated MTU size? Or does iOS receive information about the chunk size and wait until it has received all of it before triggering peripheral(_:didUpdateValueFor:error:)?
When a peripheral sends chunks of, say, 100 bytes each, can I assume that a single notification will always contain exactly 100 bytes? Or could it contain the last 50 bytes of the previous chunk and the first 50 bytes of the next one? That would be quite tricky, and it would be hard to detect where my frame begins.
I tried to find more information in Apple's documentation, but there is nothing about it.
My guess is that I always receive a single state of the characteristic, which would mean the chunking depends on the implementation on the peripheral side. But what if the characteristic is bigger than the MTU size?
First, keep in mind that sending streaming data over a characteristic is not what characteristics are designed for. The point of a characteristic is to represent some small (~20 byte) piece of information, like the current battery level, the device name, or the current heartbeat. The idea is that a characteristic changes only when the underlying value changes. It was never designed to be a serial protocol, so your default assumption should be that it's up to you to manage everything about that.
You should not write more data to a characteristic than the value you get from maximumWriteValueLength(for:). Chunking is your job.
Each individual value you write will appear to the receiver atomically. Remember, these are intended to be individual values, not chunks out of a larger data stream, so it would make no sense to overlap values from the same characteristic. "Atomically" means it all arrives or none of it. So if your MTU can handle 100 bytes, and you write 100 bytes, the other side will receive 100 bytes or nothing.
That said, there is very little error detection in BLE, and you absolutely can drop packets. It's up to you to verify that the data arrived correctly.
If you're able to target iOS 11+, do look at L2CAP, which is designed for serial protocols rather than using GATT.
If you can't do that, I recommend watching WWDC 2013 Session 703, which covers this use case in detail. (I'm having trouble finding a link to it these days, however.)
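To make the "chunking is your job" part concrete, here is a minimal sender-side sketch (names like sendInChunks are mine, and peripheral/characteristic are assumed to have been discovered elsewhere; this is not code from the answer):

```swift
import CoreBluetooth

// Sketch: a central writing a larger payload to a characteristic in
// MTU-sized pieces. Illustrative only.
func sendInChunks(_ payload: Data,
                  to characteristic: CBCharacteristic,
                  on peripheral: CBPeripheral) {
    // Never write more per operation than the negotiated maximum.
    let chunkSize = peripheral.maximumWriteValueLength(for: .withResponse)
    var offset = 0
    while offset < payload.count {
        let end = min(offset + chunkSize, payload.count)
        let chunk = payload.subdata(in: offset..<end)
        // .withResponse acknowledges each write; in real code you would wait
        // for peripheral(_:didWriteValueFor:error:) before sending the next
        // chunk, and still verify the reassembled payload yourself.
        peripheral.writeValue(chunk, for: characteristic, type: .withResponse)
        offset = end
    }
}
```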

Sound Synchronization Issues

We are going to develop a project on sound source localization using LabVIEW. We are still at an initial stage and will do everything in software, with four microphones connected to a PC (later we plan to move to NI hardware if possible).
Initially we are acquiring sound from four different microphones connected to the computer through USB. All of the microphones capture sound from a single source with some delay (milliseconds) because of their different positions. However, the sound data acquired over USB cannot be written to the sound card simultaneously: the data picks up some hold time while being written to the sound card, and we end up with offset samples when synchronizing all of the streams. Is there any way to reduce the hold time when writing the data to the sound card?
Suppose the hold time is 10 ms; we want to reduce it to microseconds or nanoseconds.
Reducing the hold time, as well as achieving precise inter-channel synchronisation, is not possible with LabVIEW running under Windows and regular sound acquisition hardware. Internal software delays comparable to the scheduler time slice (~10 ms) are to be expected.
You need at least dedicated acquisition hardware (not a collection of USB sound cards), and if you want precise synchronisation of your output with the input with minimum jitter, you need NI-FPGA. Given these requirements, I would look at the R-series.

How to preserve data integrity while minimizing the transmission size

We have sensors in the wild that send their data to a server every day via TCP/IP, using either 3G or satellite for the physical layer. The sensors can switch automatically from one to the other depending on their location and the quality of the signal with the local 3G operator.
Given that 3G and satellite communications are very expensive, we want to minimize the amount of data we send. But we also want to protect ourselves against lost data.
What would be the best strategy to ensure, with reasonable certainty, that the integrity of our data is preserved, while minimizing the amount of redundancy, i.e. the amount of data transmitted?
I've read about the zfec codec, but I'm not sure whether we need to transmit all of the chunks, or send a hash along with each chunk, or simply the minimum number of chunks plus a hash for the whole file.
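For the per-chunk-hash option specifically, a minimal sketch (my own assumptions about chunk size and digest choice, not something from the thread) would be to split the payload into fixed-size chunks and attach a digest to each, so a corrupted or missing chunk can be detected and re-requested on its own instead of retransmitting the whole file:

```swift
import CryptoKit
import Foundation

// Illustrative only: pair each fixed-size chunk with its SHA-256 digest.
func chunksWithDigests(_ payload: Data, chunkSize: Int = 512) -> [(Data, SHA256Digest)] {
    var result: [(Data, SHA256Digest)] = []
    var offset = 0
    while offset < payload.count {
        let end = min(offset + chunkSize, payload.count)
        let chunk = payload.subdata(in: offset..<end)
        result.append((chunk, SHA256.hash(data: chunk)))
        offset = end
    }
    return result
}
```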

What is a buffer? What are buffered reads and writes?

I heard the word buffer today for the first time in a long while, and I'm wondering whether somebody can give a good overview of buffers and some examples of how they matter in today's world.
A buffer is generally a portion of memory that holds data that has not yet been fully committed to its intended device. In the case of buffered I/O, there is generally a fast device and a slow device. The devices themselves need not have disparate speeds; perhaps the interfaces between them differ, or perhaps producing the data is more time-consuming than consuming it (or vice versa).
The idea is that you temporarily store the generated data in a buffer so that it is not lost when the slower device isn't ready to handle it. Once the device is ready, another buffer may take the current buffer's place, and the consuming device processes the data in the first buffer.
In this manner, the slower device receives the data at a moderated pace rather than the fire-hose that the original data source can be.
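As a toy illustration of that swap (a single-threaded sketch of my own, ignoring the locking a real implementation would need), the producer fills one buffer while the consumer drains the other, and the two trade places each time the consumer is ready:

```swift
// Double-buffering sketch (illustrative, not thread-safe).
final class DoubleBuffer {
    private var filling: [Int] = []
    private var draining: [Int] = []

    // Fast side: keep appending regardless of how slow the consumer is.
    func produce(_ value: Int) {
        filling.append(value)
    }

    // Slow side: swap buffers and take everything accumulated so far.
    func drainForConsumption() -> [Int] {
        swap(&filling, &draining)
        defer { draining.removeAll() }
        return draining
    }
}

let buffer = DoubleBuffer()
(1...5).forEach { buffer.produce($0) }
print(buffer.drainForConsumption())   // [1, 2, 3, 4, 5]
```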
