How to allocate a buffer of fixed-length characters in Dart?

I need to allocate a buffer of ASCII characters of fixed length because I receive (index, 7-char chunk) tuples in random order from a distant device.
In C/C++ I would do:
char buffer[3*7+1];
memcpy(&buffer[indexOfTheChunk*7],incomingChunk,7);
How should I proceed in dart?

You can create a fixed-size buffer by making a non-growable List:
var buffer = List<int>.filled(elementCount, 0); // non-growable by default, zero-filled
Dart also provides the Uint8List class (from dart:typed_data) specifically for creating fixed-size buffers of 8-bit bytes.
If you need to extract a Dart String from the ASCII bytes later, you can call ascii.decode on the List<int>/Uint8List.
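For example, a minimal sketch that mirrors the C snippet (three 7-byte chunks; the chunk contents here are made up):

import 'dart:convert';
import 'dart:typed_data';

void main() {
  // Fixed-size, zero-filled buffer for 3 chunks of 7 ASCII bytes each.
  final buffer = Uint8List(3 * 7);

  // Copy an incoming 7-byte chunk into its slot (here chunk index 1).
  const indexOfTheChunk = 1;
  final incomingChunk = ascii.encode('CHUNK-1'); // 7 bytes, for illustration
  buffer.setRange(indexOfTheChunk * 7, (indexOfTheChunk + 1) * 7, incomingChunk);

  // Decode the ASCII bytes into a String (unfilled slots are still 0x00 here).
  print(ascii.decode(buffer));
}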

Related

How to set length of file write function call in Lua?

I'm using an embedded Lua which provides an interface to access some data in C.
Specifically, it gets an image blob in raw bytes. I know the size of the raw data and I want to write this blob to disk.
However, I can't figure out from the lua io package how to write data of a set length. How do I set the number of bytes that the write call will consume?
The write functions of the io library expect a string or a number as input, both of which have a known length.
It's not like in C, where you might pass a pointer and a number of bytes to be written.

Can I define in what endianness I read from NSData?

I have some files written on an Android device; it wrote the bytes in big endian.
Now I try to read this file on iOS, where I need them in little endian.
I can make a for loop and
int temp;
for (...) {
    [readFile getBytes:&temp range:NSMakeRange(offset, sizeof(int))];
    target_array[i] = CFSwapInt32BigToHost(temp);
    // read more like that
}
However, it feels silly to read every single value and swap it before I can store it. Can I tell the NSData that I want the values read with a certain byte order, so that I can store them directly where they should go?
(and save some time, as the data can be quite large)
I also worry about errors when some data type changes and I forget to use the 16-bit instead of the 32-bit swap.
No, you need to swap every value. NSData is just a series of bytes with no value or meaning. It is your app that understands the meaning so it is your code logic that must swap each set of bytes as needed.
The data could be filled with all kinds of values of different sizes. 8-bit values, 16-bit values, 32-bit values, etc. as well as string data or just a stream of bytes that don't need any ordering at all. And the NSData can contain any combination of these values.
Given all of this, there is no simple way to tell NSData that the bytes need to be treated in a specific endianness.
If your data is, for example, nothing but 32-bit integer values stored in a specific endianness and you want to extract an array of them, create a helper class that does the conversion.

Is there some byte combination that can be used as a separator of streams of Int16

I was given the task to specify a file format for internal use inside an application.
One of the intended requirements says:
The data section of the file should be made up of a series of streams of type Int16 values (short integers), delimited by a suitable combination of one or more bytes.
As I understand it, each byte of an Int16 can hold any value, so I don't know how I could choose a sequence of bytes that is guaranteed not to appear incidentally inside a stream. Is there such a sequence?
(And also, if the answer is "no", what would be a good way to determine the position and size of each stream in the file?)
By "streams," I assume the request indicates that the length is unknown when the writing of the data begins.
Therefore, I'd suggest a "chunked" encoding, where each substream is parcelled out into variable-size pieces, with the length of each piece written at the beginning as a fixed size integer. An empty chunk signals the end of the substream. Normally, there would be a maximum length of a chunk to facilitate allocation of buffers for efficient reading and writing.
This is patterned after HTTP's "chunked" transfer encoding and a similar approach is used in many other formats, such as the indefinite length encoding supported by the basic encoding rules for ASN.1.
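As a rough illustration of that chunked framing (not from the original answer; the 2-byte chunk header and little-endian choice are assumptions), reading one substream might look like this in Dart:

import 'dart:typed_data';

// Each chunk: a 2-byte element count, then that many Int16 values.
// A zero-length chunk marks the end of the substream.
// Returns the decoded values and the offset just past the terminating chunk.
(List<int>, int) readSubstream(ByteData data, int offset) {
  final values = <int>[];
  while (true) {
    final count = data.getUint16(offset, Endian.little);
    offset += 2;
    if (count == 0) break; // empty chunk: end of this substream
    for (var i = 0; i < count; i++) {
      values.add(data.getInt16(offset, Endian.little));
      offset += 2;
    }
  }
  return (values, offset);
}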
I would suggest prefixing each stream with a length field, rather than trying to use delimiters, for the reason you've already given (no suitable unique delimiter). E.g.:
<length>
<stream>
<length>
<stream>
<length>
<stream>
...
where <length> is, say, a 4-byte integer giving the number of 16-bit elements in the following stream.
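For concreteness, a minimal Dart sketch of that layout (the question doesn't name a language, so the function name and the little-endian choice here are illustrative):

import 'dart:typed_data';

// Writes each stream as a 4-byte element count followed by its Int16 values.
Uint8List encodeStreams(List<List<int>> streams) {
  final totalBytes = streams.fold<int>(0, (sum, s) => sum + 4 + s.length * 2);
  final data = ByteData(totalBytes);
  var offset = 0;
  for (final stream in streams) {
    data.setUint32(offset, stream.length, Endian.little); // <length>
    offset += 4;
    for (final value in stream) {
      data.setInt16(offset, value, Endian.little); // <stream> elements
      offset += 2;
    }
  }
  return data.buffer.asUint8List();
}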

UTF8 Encoding and Network Streams

A client and server communicate with each other via TCP. The server and client send each other UTF-8 encoded messages.
When encoding UTF-8, the number of bytes per character is variable; it can take one or more bytes to represent a single character.
Let's say that I am reading a UTF-8 encoded message from the network stream and it is a huge message; in my case it was about 145k bytes. Creating a buffer of this size to read from the network stream could lead to an OutOfMemoryException, since the byte array needs that amount of contiguous memory.
It would be best, then, to read from the network stream in a while loop until the entire message is read, reading the pieces into a smaller buffer (probably 4 KB) and then decoding the string and concatenating.
What I am wondering is what happens when the very last byte of the read buffer is actually one of the bytes of a character that is represented by multiple bytes. When I decode the read buffer, that last byte and the beginning bytes of the next read would either be invalid or the wrong character. The quickest way to solve this in my mind would be to encode using a non-variable-width encoding (like UTF-16), and then make the buffer a multiple of the number of bytes per character (a multiple of 2 for UTF-16, a multiple of 4 for UTF-32).
But UTF-8 seems to be a common encoding, which leads me to believe this is a solved problem. Is there another way to address my concern other than changing the encoding? Perhaps a linked-list type object to store the bytes would be the way to handle this, since it would not use contiguous memory.
It is a solved problem. Woot woot!
http://mikehadlow.blogspot.com/2012/07/reading-utf-8-characters-from-infinite.html
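The key idea is a stateful streaming decoder that holds on to the trailing bytes of an incomplete character until the next read. As an illustration of the same idea in Dart (not the language of the question; the byte values below are just an example), dart:convert's UTF-8 decoder does this when applied to a stream of chunks:

import 'dart:convert';

Future<void> main() async {
  // Two chunks that split the 3-byte UTF-8 sequence for '€' across a boundary.
  final chunks = Stream<List<int>>.fromIterable([
    [0x48, 0x69, 0x20, 0xE2, 0x82],
    [0xAC, 0x21],
  ]);

  // utf8.decoder keeps the partial 0xE2 0x82 bytes from the first chunk and
  // completes the character when the second chunk arrives.
  final text = await chunks.transform(utf8.decoder).join();
  print(text); // Hi €!
}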

Pointer to String conversion?

I am allocating the memory with GetMem (1028 bytes length), so I have an allocated Pointer.
Then I read the content and I know how many bytes were read, e.g. 1028.
How can I cast the pointer, or convert it to a string?
Should I null terminate the content of the memory prior to conversion?
Thanks!
Use SetString. Pass it a string variable, your pointer, and the string length (1028). Delphi strings are implicitly null-terminated, so the function will add that automatically (even if your buffer already has null bytes in it).
Better yet, set the length of the string and read your data directly into it instead of using an intermediary buffer. If you must use an intermediary buffer, you may as well use one that's statically sized to 1028 bytes instead of complicating your program with dynamic memory management.
