Detect how many bytes can be written to NSOutputStream - iOS

The basic problem I'm trying to solve:
I have two streams: an NSInputStream and an NSOutputStream.
I want to take some data from the input, process it (add some framing, encode it, and so on), and pass it to the output. So far so good.
Actual problem
The problem is the NSOutputStream API: write:maxLength: returns the actual number of bytes written, which can differ from the length passed in. This is a problem because it forces extra logic to maintain some kind of buffer.
I want to avoid this. I'd like to know how many bytes the output stream will accept without buffering, so I could calculate how much data to read from the input stream (allowing for the framing and encoding I add).
I don't want to maintain an extra buffer.
The output stream is associated with a TCP socket; the input stream can be associated with any kind of resource.

This is Apple's sample implementation for this situation:
- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode
{
    switch (eventCode) {
        case NSStreamEventHasSpaceAvailable:
        {
            uint8_t *readBytes = (uint8_t *)[_data mutableBytes];
            readBytes += byteIndex; // instance variable to move pointer
            int data_len = [_data length];
            unsigned int len = ((data_len - byteIndex >= 1024) ?
                                1024 : (data_len - byteIndex));
            uint8_t buf[len];
            (void)memcpy(buf, readBytes, len);
            len = [stream write:(const uint8_t *)buf maxLength:len];
            byteIndex += len;
            break;
        }
        // continued ...
    }
}
In this implementation, a chunk of at most 1024 bytes is written at a time.
And a note was provided:
There is no firm guideline on how many bytes to write at one time.
Although it may be possible to write all the data to the stream in one
event, this depends on external factors, such as the behavior of the
kernel and device and socket characteristics. The best approach is to
use some reasonable buffer size, such as 512 bytes, one kilobyte (as
in the example above), or a page size (four kilobytes).
As described, it depends on the other side; I don't know whether that can be figured out by investigating the receiver. Perhaps the suggested write size decreases the chance that some bytes will not be written, but either way the partial-write logic still has to be implemented.

You'll have to buffer. The stream can't predict how much can be written until it makes the attempt. But you can keep the buffer as small as you like by (a) attempting to write less data at once, and (b) buffering only the data that wasn't written on the prior attempt.
The result of such an arrangement is to trade away speed for space. Consider the one-byte buffer as the degenerate case.
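For illustration, a minimal sketch of point (b): `pending` and `drainTo:` are made-up names (an NSMutableData instance variable holding only the not-yet-written bytes), not part of any Apple API.

- (void)drainTo:(NSOutputStream *)stream
{
    while (pending.length > 0 && stream.hasSpaceAvailable) {
        NSInteger written = [stream write:(const uint8_t *)pending.bytes
                                maxLength:pending.length];
        if (written <= 0) {
            break; // -1 is an error, 0 means no bytes were accepted; retry later
        }
        // Drop exactly the bytes the stream accepted; whatever remains is
        // the only buffer carried over to the next space-available event.
        [pending replaceBytesInRange:NSMakeRange(0, (NSUInteger)written)
                           withBytes:NULL
                              length:0];
    }
    // Only when `pending` is empty do we read more from the input stream.
}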

Related

How does one create insertion or deletion mutations using LibFuzzer?

libFuzzer has functions that can be implemented by the end-user like this:
size_t LLVMFuzzerCustomMutator(uint8_t *data, size_t size,
                               size_t max_size, unsigned int seed);
Am I free to sometimes insert bytes into data, thereby making it larger? I assume max_size may not be exceeded. If I needed more bytes than max_size to perform the necessary insertion, how would I do that? And do I return the new size?
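For reference, a minimal sketch of an insertion mutator under the usual libFuzzer contract (mutate data in place, never exceed max_size, return the new size); the bail-out behavior is illustrative, a real mutator would fall back to a different mutation:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>   /* rand_r (POSIX) */
#include <string.h>

size_t LLVMFuzzerCustomMutator(uint8_t *data, size_t size,
                               size_t max_size, unsigned int seed)
{
    if (size + 1 > max_size)
        return size;                          /* no room to grow: bail out */

    size_t pos = size ? rand_r(&seed) % (size + 1) : 0;

    memmove(data + pos + 1, data + pos, size - pos);  /* shift tail right */
    data[pos] = (uint8_t)(rand_r(&seed) & 0xFF);      /* the inserted byte */
    return size + 1;                                  /* report the new size */
}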

Streaming ADC data through UART

I am trying to stream sampled values from an 8-bit ADC through UART on an STM32 Nucleo board.
I use the ADC with DMA. The sample rate is around 6 kHz, so filling a buffer with 100 converted values takes around 17 ms.
After that I want to send those values through UART at a baud rate of 115200. Since each ADC converted value is a half-word, 100 converted values come to 1600 bits. That means I can send them in about 14 ms without overwriting data.
This is my attempt in code:
/* Private variables */
#define ADC_BUF_LEN 100
uint16_t adc_buf[ADC_BUF_LEN];
uint8_t flag = 0;

/* USER CODE BEGIN 2 */
HAL_ADC_Start_DMA(&hadc, (uint32_t *)adc_buf, ADC_BUF_LEN);
HAL_TIM_Base_Start(&htim2);

while (1)
{
    if (flag == 1)
    {
        HAL_UART_Transmit(&huart4, (uint8_t *)adc_buf, 100, 1);
        flag = 0;
        HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_9);
    }
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    HAL_GPIO_TogglePin(GPIOA, LED_GREEN_Pin);
    flag = 1;
}
I have attached a picture of the transmitted data in the terminal.
The ADC input is a 1 kHz sine wave, 2 V pk-pk.
I can see with the naked eye that my system is not working: if I plot the data, it isn't a sine wave.
The project is for EMG signal processing: I need to sample the signal and then process it in Python.
Setting the Timeout parameter of HAL_UART_Transmit to 1 is not correct. You have already calculated that the transfer is going to take 14 ms! This means the function will give up and return after only a small proportion of the data has been transmitted.
To do this more than once without gaps in the data you are going to need to use DMA on both the ADC and UART at the same time.
Enable the half-transfer interrupt for the ADC DMA, or poll for the half-transfer flag. When you receive it, start the UART in DMA mode on the first half of the buffer. It should complete in 7 ms, which is 1.5 ms before the ADC DMA starts overwriting the data it contains. When you get the ADC DMA complete interrupt or flag, start the UART DMA on the second half of the buffer (see the sketch below).
Alternatively, the DMA on most STM32 parts also supports a "double-buffer" mode, which works more or less the same way, except that you only use the complete interrupt and keep two separate data pointers rather than calculating the offset of half a buffer.
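A minimal sketch of the half-transfer scheme, assuming the handles and buffer from the question and a UART TX DMA channel configured in CubeMX; these callbacks replace the flag/polling loop above (HAL_ADC_ConvHalfCpltCallback, HAL_ADC_ConvCpltCallback, and HAL_UART_Transmit_DMA are standard HAL APIs):

#include "main.h"   /* CubeMX project header: HAL types, hadc/huart4 handles */

#define ADC_BUF_LEN 100
extern uint16_t adc_buf[ADC_BUF_LEN];
extern UART_HandleTypeDef huart4;

/* First half of the buffer is stable: send it while the ADC DMA fills the second half. */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    HAL_UART_Transmit_DMA(&huart4, (uint8_t *)&adc_buf[0],
                          (ADC_BUF_LEN / 2) * sizeof(uint16_t));
}

/* Second half is stable: send it while the ADC DMA wraps around to the first half. */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    HAL_UART_Transmit_DMA(&huart4, (uint8_t *)&adc_buf[ADC_BUF_LEN / 2],
                          (ADC_BUF_LEN / 2) * sizeof(uint16_t));
}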

Record PCM audio data every 10 milliseconds without playback

I need to record PCM audio every 10 milliseconds, without playback, in Swift.
I have tried this code, but I can't find how to stop playback while recording:
RecordAudio Github Repo
And a second question: how can I properly get PCM data out of the circular buffer for an encode/decode process? When I convert the recorded audio data to signed bytes, unsigned bytes, or anything else, the converted data sometimes comes out corrupted. What is the best practice for this kind of processing?
In the RecordAudio sample code, the audio format is specified as Float (32-bit floats). When doing a float-to-integer conversion, you have to make sure your scale and offset result in a value in the legal range for the destination type, e.g. check that -1.0 to 1.0 maps to 0 to 255 (unsigned byte), and that out-of-range values are clipped to legal values. Also pay attention to the number of samples you convert, as an Audio Unit callback can vary the frameCount sent (the number of samples returned). You most likely won't get exactly 10 ms in any single RemoteIO callback, but may have to observe a circular buffer filled by multiple callbacks, or a larger buffer that you will have to split.
When RemoteIO is running in play-and-record mode, you can usually silence playback by zeroing the bufferList buffers (after copying, analyzing, or otherwise using the data in the buffers) before returning from the Audio Unit callback.
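For illustration, a minimal sketch of that conversion in plain C (the function name is made up; only the clip-then-scale pattern matters):

#include <stddef.h>
#include <stdint.h>

/* Sketch only: map 32-bit float samples in -1.0..1.0 to unsigned bytes
   in 0..255, clipping out-of-range samples first. */
static void floats_to_u8(const float *in, uint8_t *out, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        float s = in[i];
        if (s >  1.0f) s =  1.0f;                 /* clip to legal range */
        if (s < -1.0f) s = -1.0f;
        out[i] = (uint8_t)((s + 1.0f) * 127.5f);  /* -1..1 -> 0..255 */
    }
}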

Optimum buffer to memory ratio

I am trying to build a DAQ using Sparrow's Kmax. I have a ready-made template in which the total memory is 16 MB.
static final int evSize = 4;                          // Number of parameters per event of this type
static final int BUF_SIZE = evSize * 1000;            // Buffer size  <-- why pick this buffer size?
static final int LP_MEM_TOP = 0xFFFF00;               // Memory size, 16 MB
static final int READ_START = LP_MEM_TOP - BUF_SIZE;  // Start the read/write pointer one buffer before the end
In the above code you can see that the buffer is very small compared to the total memory. From what I know, the buffer is the temporary memory where data is stored before being sent to the computer.
In my case I am using a SCSI bus to transfer the data, and the system is really slow. What can I do with the buffer to increase the speed or the performance? Is there a particular reason to have such a small buffer? I am not sure I have understood what exactly the memory and the buffer do.
Any help is more than welcome!

Is byte ordering the same across iOS devices, and does this make using htonl and ntohl unnecessary between iOS devices?

I was reading this example on how to use NSData for network messages.
When creating the NSData, the example uses:
unsigned int state = htonl(_state);
[data appendBytes:&state length:sizeof(state)];
When converting the NSData back, the example uses:
[data getBytes:buffer range:NSMakeRange(offset, sizeof(unsigned int))];
_state = ntohl(*(unsigned int *)buffer); // dereference the buffer, not cast the pointer
Isn't it unnecessary to use htonl and ntohl in this example?
- Since the data is being packed and unpacked on iOS devices, won't the byte ordering be the same, making htonl and ntohl unnecessary?
- Isn't the manner in which they are used incorrect? The example uses htonl for packing and ntohl for unpacking. But in reality, shouldn't one only do this if one knows that the sender or receiver is using a particular format?
The example uses htonl for packing, and ntohl for unpacking.
This is correct.
When a sender transfers data (integers, floats) over the network, it should reorder them to "network byte order". The receiver performs the decoding by reordering from network byte order to "host byte order".
But in reality, shouldn't one only do this if one knows that the sender or receiver is using a particular format?
Usually, a sender doesn't know the byte order of the receiver, and vice versa. Thus, in order to avoid ambiguity, one needs to define the byte order of the "network". This works well, provided sender and receiver actually do correctly encode/decode for the network.
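For illustration, a sketch of that round trip using the question's snippets, corrected to copy the raw bytes into an integer before swapping (htonl and ntohl come from <arpa/inet.h>; _state, data, and offset are the example's own variables):

// Sender: host -> network byte order, then append the 4 raw bytes.
uint32_t state = htonl(_state);
[data appendBytes:&state length:sizeof(state)];

// Receiver: copy the 4 raw bytes into an integer, then network -> host order.
uint32_t wire = 0;
[data getBytes:&wire range:NSMakeRange(offset, sizeof(wire))];
_state = ntohl(wire);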
Edit:
If you are concerned about performance of the encoding:
On modern CPUs the required machine code for byte swapping is quite fast.
On the language level, functions to encode and decode a range of bytes can be made quite fast as well. The Objective-C example in your post doesn't belong to those "fast" routines, though.
For example, since the host byte order is known at compile time, ntohl becomes an "empty" function (aka "NoOp") if the host byte order equals the network byte order.
Other byte-swap utility functions, which extend the ntoh family of macros to 64-bit, double, and float values, may use C++ template tricks that can also become "NoOp" functions.
These "empty" functions can then be further optimized away completely, which effectively results in machine code which just performs a move from the source buffer to the destination buffer.
However, since the additional overhead for byte swapping is pretty small, these optimizations for the case where swapping is not needed are only perceptible in high-performance code. But your first statement:
[data getBytes:buffer range:NSMakeRange(offset, sizeof(unsigned int))];
is MUCH more expensive than the following statement
_state = ntohl(*(unsigned int *)buffer);
even when byte swapping is required.
