I'm reading messages from a buffer, but the messages can be larger than the buffer size. Stitching multiple buffers together into one message is no issue, but I'm unsure what the timeout for reading a single message should be. After reading from the buffer, how long should I attempt to read from it again before timing out and considering the message complete?
Context:
I am trying to replay a BLF file using python-can over a Vector interface, using a MessageSync iterator object and can.send() on the yielded messages. It works as expected for 20-30 seconds, but after that it keeps raising an ERR_QUEUE_FULL exception while sending CAN messages. I have tried to handle that using can_bus.flush_tx_buffer() and can_bus.reset(), but to no effect. I understand that the transmit buffer gets full when messages are written too fast in a given segment, causing a buffer overflow.
Usage:
from can import LogReader, MessageSync

replayReaderObj = LogReader(replay_file_path)
msgSyncObj = MessageSync(messages=replayReaderObj, timestamps=True)
I iterate over msgSyncObj in a for loop and call can.send() on each message (provided the message is not an error frame), as in the sketch below. With the default arguments gap=0.0001 and skip=60, the replay timestamps are considerably delayed compared to the replay file. Hence gap=0 was used in the next attempt, so that only the offset difference is considered. That aligns the replay timestamps, but causes the buffer overflow within a few seconds.
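This is not the exact script, but a minimal self-contained sketch of the replay loop described above; the bus configuration (channel, bitrate) and the file name are placeholders:

import can

can_bus = can.interface.Bus(bustype="vector", channel=0, bitrate=500000)  # placeholder config
replayReaderObj = can.LogReader("replay.blf")                             # placeholder path
msgSyncObj = can.MessageSync(messages=replayReaderObj, timestamps=True)   # gap=0 in the second attempt

for msg in msgSyncObj:
    if msg.is_error_frame:
        continue          # error frames from the log are not replayed
    can_bus.send(msg)     # after 20-30 s this starts failing with ERR_QUEUE_FULL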
The same replay file, when run through a Vector CANoe replay block, runs just fine without any buffer issues for the given replay duration (+10%).
Question:
Can anyone shed light on whether python-can and Vector CANoe (both running on a Win10 PC) configure the transmit queue buffer differently? Any suggestion on how I can increase the transmit queue buffer used by python-can would be highly appreciated, along with advice on handling such buffer overflows (since flush_tx_buffer() isn't having any impact).
Note: In the Vector Hardware Configuration, the transmit queue size is configured as 256 messages. I am not sure whether python-can uses the same configuration; I would like to confirm that before changing it.
Additional context
OS and version: Win 10
Python version: Python 3
python-can version: 3.3.4
python-can interface/s (if applicable): Vector VN1630
There is another real ECU that acknowledges the Tx messages. The replay runs fine if I keep a decent wait time (10 ms, the minimum that time.sleep() on Windows can reliably provide) between consecutive messages. The drawback is that with this wait time injected, the replay takes 6x-7x the actual replay time.
Let me know if any further information is needed on top of this. Sorry, I will not be able to share the trace file as it is proprietary, but I can get back to you with details about its nature.
Can someone tell me what the page write buffer size of this FRAM memory, the MB85RC256V, is?
Is it 32 Kbytes?
This device doesn't have a page write buffer: it is capable of writing individual bytes as fast as you can send them, so no buffering is needed.
Buffer size is something you have to worry about on much slower memory devices, such as EEPROM, that need many milliseconds to do an erase/write cycle. Devices with a write buffer are capable of writing multiple bytes in that same time - but it's entirely your responsibility to make sure that all the bytes fit in the same page. Crossing a page boundary in the middle of a write is likely to result in some data being written to the wrong address.
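For devices that do have pages (unlike this FRAM), here is a rough Python sketch of splitting a write so that no single chunk crosses a page boundary; PAGE_SIZE and write_page() are hypothetical placeholders, not tied to any particular part:

PAGE_SIZE = 64  # hypothetical page size in bytes; check the actual datasheet

def write_page(address, chunk):
    # hypothetical low-level page write; on real hardware this would issue an
    # I2C/SPI write and then wait for the erase/write cycle to finish
    print(f"write {len(chunk)} bytes at 0x{address:04X}")

def eeprom_write(address, data):
    """Write data, splitting it so no chunk crosses a page boundary."""
    offset = 0
    while offset < len(data):
        # bytes remaining in the current page, starting at this address
        room_in_page = PAGE_SIZE - ((address + offset) % PAGE_SIZE)
        chunk = data[offset:offset + room_in_page]
        write_page(address + offset, chunk)
        offset += len(chunk)

eeprom_write(0x003C, bytes(range(20)))  # spans the 0x0040 page boundary, so two writes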
I've got a dedicated thread that captures audio from ALSA through snd_pcm_readi(). Periodically I get a short read, meaning snd_pcm_readi() returns a positive integer lower than my buffer size, and there's an audible 'pop' in my audio stream. I then set the thread priority to real-time, which gives a tangible benefit, far fewer short reads, but it doesn't solve the problem.
Now the question: before going down the bumpy road of a real-time patched Linux kernel, is there something else I can do to squeeze out some more performance? Is calling snd_pcm_readi() in a dedicated thread the best way to pull audio out of ALSA?
For playback, the buffer size determines the latency.
For capture, it does not; only the period size determines how long you must wait until recorded samples are reported to be available.
So to prevent overruns, make the buffer as large as possible (e.g., by calling snd_pcm_hw_params_set_buffer_size_max() after setting the other parameters).
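To put rough numbers on that (the sample rate and sizes below are example values, not taken from your setup), the period determines how often captured samples become available, while the buffer determines how long the reading thread may be delayed before an overrun:

RATE = 48000           # example sample rate in Hz
PERIOD_FRAMES = 480    # example period size: data is reported every period
BUFFER_FRAMES = 48000  # example buffer size: headroom while the thread is delayed

print("capture latency per read:", 1000 * PERIOD_FRAMES / RATE, "ms")  # 10.0 ms
print("headroom before overrun: ", 1000 * BUFFER_FRAMES / RATE, "ms")  # 1000.0 ms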
Any pointers to detect, through a script on Linux, that an MP3 radio stream is breaking up? I am having issues with my radio station when the internet connection slows down, which causes the stream on the client side to stop, buffer, and then play.
There are a few ways to do this.
Method 1: Assume constant bitrate
If you know that you will have a constant bitrate, you can measure that bitrate over time on the server and determine when it slows below a threshold. Note that this isn't the most accurate method, and won't always work. Not all streams use a constant bitrate. But, this method is as easy as counting bytes received over the wire.
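A minimal sketch of that idea, here done by connecting to the stream and counting bytes per second; the stream URL, nominal bitrate, and alert threshold are placeholders:

import time
import urllib.request

STREAM_URL = "http://example.com/stream.mp3"  # placeholder
EXPECTED_KBPS = 128                           # placeholder: the stream's nominal bitrate
ALERT_RATIO = 0.9                             # alert if we drop below 90% of nominal

with urllib.request.urlopen(STREAM_URL) as stream:
    while True:
        start = time.monotonic()
        received = 0
        # count bytes over roughly a one-second window
        while time.monotonic() - start < 1.0:
            chunk = stream.read(4096)
            if not chunk:
                raise RuntimeError("stream ended")
            received += len(chunk)
        elapsed = time.monotonic() - start
        kbps = received * 8 / 1000 / elapsed
        if kbps < EXPECTED_KBPS * ALERT_RATIO:
            print(f"stream degraded: {kbps:.0f} kbit/s")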
Method 2: Playback on server
You can run a headless player on the server (via cvlc or similar) and track when it has buffer underruns. This will work at any bitrate and will give you a decent idea of what's happening on the clients. This sort of player setup also enables utility functions like silence detection. The downside is that it takes a little bit of CPU to decode, and a bit more effort to automate.
Method 3 (preferred): Log output buffer on source
Your source encoder will have a buffer on its output, data waiting to be sent to the server. When this buffer grows over a particular threshold, log it. This means that output over the network stalled for whatever reason. This method gets the appropriate data right from the source, and ensures you don't have to worry about clock synchronization issues that can occur over time in your monitoring of audio streams. (44.1 kHz to your encoder might be 44.101 kHz to a player.) This method might require modifying your source client.
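The exact hook depends on your source client, but as a generic illustration of the idea, here is a sketch where encoded frames pass through a queue before being sent, and the queue depth is logged whenever it exceeds a threshold; all names and numbers are hypothetical:

import logging
import queue

send_queue = queue.Queue()  # encoded frames waiting to go out over the network
DEPTH_THRESHOLD = 50        # hypothetical: frames allowed to back up before we log

def enqueue_encoded_frame(frame):
    send_queue.put(frame)
    depth = send_queue.qsize()
    if depth > DEPTH_THRESHOLD:
        # the network sender has stalled; the output buffer is growing
        logging.warning("output buffer backed up: %d frames waiting", depth)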
I am developing a VoIP app and need to play data from RTP packets which are sent by the server every 20 ms.
I have a buffer which accumulates samples from the RTP packets. The audio unit render callback reads data from this buffer.
The problem is that I cannot synchronise the audio unit with the RTP stream. The preferred IO buffer duration cannot be set to exactly 20 ms, and the number of frames requested by the render callback also cannot be set to the packet's number of samples.
As a result, there are two possible situations (depending on sample rate and IO buffer duration):
a) the audio unit reads from my buffer faster than it is filled from RTP packets; in this case the buffer periodically doesn't contain the requested number of samples and I get distorted sound;
b) the buffer is filled faster than the audio unit reads from it; in this case the buffer periodically overflows and samples from new RTP packets are lost.
What should I do to avoid this issue?
If you have control over the packet rate, this is typically done via a "leaky bucket" algorithm. A circular FIFO/buffer can hold the "bucket" of incoming data, and a certain amount of padding needs to be kept in the FIFO/buffer to cover variations in network rate and latency. If the bucket gets too full, you ask the packet sender to slow down, etc.
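A rough sketch of such a circular FIFO with a fill-level check; the capacities, thresholds, and the "ask the sender to slow down" signal are all placeholder assumptions:

from collections import deque

class JitterBuffer:
    """Circular FIFO of incoming audio frames, with padding to absorb network jitter."""

    def __init__(self, capacity_frames=50, min_padding=5, high_water=40):
        self.frames = deque(maxlen=capacity_frames)  # oldest frames drop if we overflow
        self.min_padding = min_padding               # keep at least this much padding
        self.high_water = high_water                 # above this, ask the sender to slow down

    def push(self, frame):
        self.frames.append(frame)
        return len(self.frames) > self.high_water    # True means "tell the sender to back off"

    def pop(self, silence_frame):
        if len(self.frames) < self.min_padding:
            return silence_frame                     # underrun guard: play silence, keep padding
        return self.frames.popleft()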
On the audio playback end, various audio concealment methods (PSOLA time-pitch modification, etc.) can be used to slightly stretch or shrink the data to fit, if adequate buffer fill thresholds are exceeded.
If you are receiving audio
Try having the client automatically and periodically (e.g. every second) request that the server send audio of a certain bitrate, dependent on the buffer size and connection speed.
For example, have each audio sample be 300 kbit large if there are, say, 20 samples in the buffer and a 15000 kbit/s connection speed, and increase/decrease the audio sample bitrate dynamically as necessary, as sketched below.
If you are sending audio
Do the same, but in reverse: have the server periodically request that the client change the audio bitrate.
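As a rough sketch of the receiving-side idea (the sending side is the mirror image), where the scaling rule and all numbers are placeholder assumptions rather than a recommended policy:

def pick_bitrate_kbit(buffered_samples, link_speed_kbit_s,
                      target_buffer=20, min_kbit=64, max_kbit=320):
    """Choose the per-sample bitrate to request from the server.

    Placeholder heuristic: spend a small fraction of the link speed, scaled by
    how full the buffer is relative to the target fill level.
    """
    fill_ratio = buffered_samples / target_buffer      # below 1 means we are running low
    proposed = link_speed_kbit_s * 0.02 * fill_ratio   # e.g. 15000 * 0.02 * 1.0 = 300
    return max(min_kbit, min(max_kbit, proposed))

# Example matching the numbers above: 20 buffered samples on a 15000 kbit/s link
print(pick_bitrate_kbit(20, 15000))  # -> 300.0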