How to write raw binary data using Indy TCP Client in C++ Builder

Using Embarcadero C++ Builder 10.3.
I have a DynamicArray<uint8_t> myData object. I want to send/write its raw binary content (bytes) to a server using the TIdTcpClient component. I'm going about it like this:
TIdTCPClient *tcpClient1;
// Bla Bla Bla
tcpClient1->IOHandler->Write(rawData);
Where rawData should be of type TIdBytes or TIdStream
So basically, it boils down to the following: How to convert myData object to a rawData type of either TIdBytes or TIdStream?

First off, TIdStream has not been part of Indy in a VERY VERY LONG time, which makes me wonder if you are using a very old version of Indy, not the one that shipped with C++Builder 10.3. Indy has supported the RTL's standard TStream class for a very long time.
That being said...
TIdBytes is an alias for System::DynamicArray<System::Byte>, where System::Byte is an alias for unsigned char, which has the same size and signedness as uint8_t (depending on the compiler, uint8_t might even just be an alias for unsigned char).
So, the simplest solution, without having to make a separate copy of your data, is to simply type-cast it, eg:
tcpClient1->IOHandler->Write(reinterpret_cast<TIdBytes&>(myData));
This is technically undefined behavior, since DynamicArray<uint8_t> and DynamicArray<Byte> are unrelated types (unless uint8_t and Byte are both aliases for unsigned char), but it will work in your case because the same underlying RTL code backs both array types, and uint8_t and Byte have the same memory layout.
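If you do rely on the cast, you can at least make that layout assumption checkable at compile time (a minimal sketch; static_assert requires C++11, and IdGlobal.hpp is pulled in here only for the TIdBytes alias):
#include <cstdint>
#include <IdGlobal.hpp> // for the TIdBytes alias

// Guard the reinterpret_cast: the element types must match in size.
static_assert(sizeof(System::Byte) == sizeof(std::uint8_t),
              "System::Byte and uint8_t must have the same layout");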
Alternatively, the next simplest solution, without copying data or invoking undefined behavior, is to use Indy's TIdReadOnlyMemoryBufferStream class in IdGlobal.hpp, eg:
TIdReadOnlyMemoryBufferStream *ms = new TIdReadOnlyMemoryBufferStream(&myData[0], myData.Length);
try {
    tcpClient1->IOHandler->Write(ms);
}
__finally {
    delete ms;
}
Or:
{
    // std::make_unique requires <memory> (C++14)
    auto ms = std::make_unique<TIdReadOnlyMemoryBufferStream>(&myData[0], myData.Length);
    tcpClient1->IOHandler->Write(ms.get());
}
Otherwise, the final solution is to just copy the data into a TIdBytes, eg:
{
    TIdBytes bytes;
    bytes.Length = myData.Length;
    memcpy(&bytes[0], &myData[0], myData.Length); // requires <cstring>
    // or: std::copy(myData.begin(), myData.end(), bytes.begin()); // requires <algorithm>
    tcpClient1->IOHandler->Write(bytes);
}

Related

What is the Delphi equivalent for LPLONG?

I have to access several functions of a DLL written in c from Delphi (currently Delphi7).
I can do it without problems when the parameters are scalar (thanks to the examples found on this great site!), but I have been stuck for some time when the parameters include a pointer to an array of longs.
This is the definition in the header file of one of the functions:
BOOL __stdcall BdcValida (HANDLE h, LPLONG opcl);
(opcl is an array of longs)
And this is a portion of my Delphi code:
type
  TListaOpciones = array of LongInt; // I tried with a static array too!
  Popcion = ^LongInt; // tried with Integer, Cardinal, Word...
var
  dllFunction: function(h: THandle; opciones: Popcion): Boolean; stdcall;
  arrayOPciones: TListaOpciones;
  resultado: Boolean;
begin
  // ... I give values to aHandle and the arrayOPciones array ...
  resultado := dllFunction(aHandle, @arrayOPciones[0]);
end;
The error message when executing it is:
"Project xxx raised too many consecutive exceptions: access violation
at 0x000 .."
What is the equivalent in Delphi for LPLONG? Or am I calling the function in an incorrect way?
Thank you!
LONG maps to Longint, and LPLONG maps to ^Longint. So, you have translated that type correctly.
You have translated BOOL incorrectly though. A Windows BOOL is four bytes, whereas Delphi's Boolean is only one; the return type should be BOOL (declared in the Windows unit) or LongBool. You can use either, since the former is an alias for the latter: function(h: THandle; opciones: Popcion): LongBool; stdcall;
Your error lies in code or detail we can't see. Perhaps you didn't allocate an array. Perhaps the array is incorrectly sized. Perhaps the handle is not valid. Perhaps earlier calls to the DLL failed to check for errors.

How to load/save wxString from/to wxStream or wxMemoryBuffer?

I have my own class (nBuffer) like wxMemoryBuffer and I use it to load/save custom data, it's more convenient than using streams because I have a lot of overloaded methods for different data types based on these:
class nBuffer
{
    // ...
    bool wr(void* buf, long unsigned int length); // write
    bool rd(void* buf, long unsigned int length); // read
};
I'm trying to implement methods to load/save wxString from/to this buffer.
With wxWidgets 2.8 I used the following code (simplified):
bool nBuffer::wrString(wxString s)
{ // save string:
    int32 lng = s.Length() * 4;
    wr(&lng, 4); // length
    wr(s.GetData(), lng); // string itself
    return true;
}
bool nBuffer::rdString(wxString &s)
{ // load string:
    uint32 lng;
    rd(&lng, 4); // length
    s.Alloc(lng);
    rd(s.GetWriteBuf(lng), lng); // string itself
    s.UngetWriteBuf();
    s = s.Left(lng / 4);
    return true;
}
This code is not good because:
1. It assumes there are 4 bytes of data for each string character (there might be fewer), and
2. with wxWidgets 3.0, wxString::GetData() returns wxCStrData instead of void*, so the compiler fails on wr(s.GetData(), lng); and I have no idea how to convert it to a simple byte buffer.
Strangely, I found nothing after googling for hours, and nothing useful in the wxWidgets docs either.
The questions are:
1. What is the preferred, correct and safe way to convert a wxString to a byte buffer, and
2. the same for converting the byte buffer back to a wxString?
For arbitrary wxStrings you need to serialize them in either UTF-8 or UTF-16 format. The former is the de facto standard for data exchange, so I advise using it, but you could prefer UTF-16 if you know your data is biased toward the sort of characters that take less space in UTF-16 than in UTF-8 and space saving is important for you.
Assuming you use UTF-8, serializing is done using utf8_str() method:
wxScopedCharBuffer const utf8 = s.utf8_str();
wr(utf8.data(), utf8.length());
Deserializing is as simple as using wxString::FromUTF8(data, length).
For UTF-16 you would use general mb_str(wxMBConvUTF16) and wxString(data, wxMBConvUTF16, length) methods, which could also be used with wxMBConvUTF8, but the UTF-8-specific methods above are more convenient and, in some build configurations, more efficient.
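Putting that together with the nBuffer interface from the question, the two methods might look like this (a sketch: it assumes the wr/rd signatures shown above, the 32-bit byte-count prefix is my own framing choice, and the const_cast is only needed because wr() takes a non-const void*):
#include <wx/string.h>
#include <wx/buffer.h>

bool nBuffer::wrString(const wxString &s)
{
    // Serialize as UTF-8: a 32-bit byte count, then the raw bytes.
    wxScopedCharBuffer const utf8 = s.utf8_str();
    wxUint32 lng = utf8.length();
    if (!wr(&lng, 4))
        return false;
    return wr(const_cast<char*>(utf8.data()), lng);
}

bool nBuffer::rdString(wxString &s)
{
    // Read the byte count, then decode exactly that many UTF-8 bytes.
    wxUint32 lng = 0;
    if (!rd(&lng, 4))
        return false;
    wxCharBuffer buf(lng); // allocates lng + 1 bytes
    if (!rd(buf.data(), lng))
        return false;
    s = wxString::FromUTF8(buf.data(), lng);
    return true;
}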

Where are the parameters for Indy's TIdCompressorZLib.CompressStream Method documented?

The TIdCompressorZLib component is used for compression and decompression in the Delphi/C++Builder Indy library. The CompressStream method has the following definition:
public: virtual void __fastcall CompressStream(System::Classes::TStream* AInStream, System::Classes::TStream* AOutStream, const TIdCompressionLevel ALevel, const int AWindowBits, const int AMemLevel, const int AStrategy);
The complete description of those parameters in the help file is:
CompressStream is a public overridden procedure that implements the abstract virtual method declared in the ancestor class.
AInStream is the stream containing the uncompressed contents used in
the compression operation.
AOutStream is the stream used to store the compressed contents from
the compression operation. AOutStream is cleared prior to outputting
the compressed contents from the operation. When AOutStream is
omitted, the stream in AInStream is cleared and reused for the output
from the compression operation.
Use ALevel to indicate the desired compression level for the
operation.
Use AWindowsBits and AMemLevel to control the memory footprint
required to perform in-memory compression using the ZLib library.
Use AStrategy to control the RLE-encoding strategy used in the
compression operation.
ALevel's values are defined on the help page for TIdCompressionLevel, but I cannot find any indication of what values should be used for AWindowBits, AMemLevel, or AStrategy, which are just integers.
I looked in the source code, but CompressStream just delegates to IndyCompressStream, which is listed in the help file as:
IndyCompressStream(TStream InStream, TStream OutStream, const int level = Z_DEFAULT_COMPRESSION, const int WinBits = MAX_WBITS, const int MemLevel = MAX_MEM_LEVEL, const int Stratagy = Z_DEFAULT_STRATEGY);
The help for IndyCompressStream doesn't even list the minimal description of the parameters that CompressStream does.
I tracked down the file where (I think) those default constants mentioned in IndyCompressStream live, source\Indy10\Protocols\IdZLibHeaders.pas, and they are
Z_DEFAULT_STRATEGY = 0;
Z_DEFAULT_COMPRESSION = -1;
MAX_WBITS = 15; { 32K LZ77 window }
MAX_MEM_LEVEL = 9;
However, the value given for Z_DEFAULT_COMPRESSION is not even a legal value for that parameter according to the documentation for TIdCompressionLevel.
Is there some documentation somewhere about what AWindowBits, AMemLevel, and AStrategy mean to this component, and what values are reasonable to use for them? Are the values listed above the actual recommended defaults? Also, the source files include "indy", "Indy10", and "indyimpl" directories. Which of those should we be using to find the source for the current Indy components?
Thanks!
You will need to look to the zlib documentation in zlib.h. In particular, the parameters to deflateInit2().
In nearly all cases, the only ones you should mess with are the compression level and the window bits. For window bits, you would normally leave the window size at 32K (15), but either add 16 for the gzip format (31), or negate (-15) to get the raw deflate format with no header or trailer. For some special kinds of data, you may get an improvement with a different compression strategy, e.g. image or other numerical arrays of data.
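For reference, the zlib call that these Indy parameters ultimately map onto looks like this (a sketch against the plain zlib C API from zlib.h, not the Indy wrapper; the function name here is my own):
#include <string.h>
#include <zlib.h>

// Set up a deflate stream that produces gzip-format output:
// 32K window (15) plus 16 to select the gzip wrapper.
int init_gzip_deflate(z_stream *strm)
{
    memset(strm, 0, sizeof(*strm));
    return deflateInit2(strm,
                        Z_DEFAULT_COMPRESSION, // -1 is legal here, unlike Indy's ALevel
                        Z_DEFLATED,            // the only supported method
                        15 + 16,               // windowBits: +16 = gzip, negative = raw
                        8,                     // memLevel
                        Z_DEFAULT_STRATEGY);
}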
Thank you for the comments and answers, especially Remy and Mark. I had not realized that the Indy units were wrappers around zlib, and that the parameters were defined in the zlib library.
I was trying to create a gzip format stream for uploading to a server that was expecting gzip.
Here is the working code for gzip compression and decompression:
void __fastcall TForm1::Button1Click(TObject *Sender)
{
    TStringStream* streamIn = new TStringStream(String("This is some data to compress"));
    TMemoryStream* streamCompressed = new TMemoryStream;
    TStringStream* streamOut = new TStringStream;
    try {
        /* this also works to compress to gzip format, but you must #include <IdZlib.hpp>
        CompressStreamEx(streamIn, streamCompressed, Idzlib::clDefault, zsGZip); */
        // NOTE: according to the docs, you can leave AOutStream null and AInStream
        // will be cleared and reused, but I could not get that to work
        IdCompressorZLib1->CompressStream(
            streamIn,         // System::Classes::TStream* AInStream,
            streamCompressed, // System::Classes::TStream* AOutStream,
            1,                // const Idzlibcompressorbase::TIdCompressionLevel ALevel,
            15 + 16,          // const int AWindowBits, -- add 16 to get gzip format
            8,                // const int AMemLevel, -- see note below
            0);               // const int AStrategy);
        streamCompressed->Position = 0;
        IdCompressorZLib1->DecompressGZipStream(streamCompressed, streamOut);
        String out = streamOut->DataString;
        ShowMessage(out);
    }
    __finally {
        // free the streams (the original sample leaked them)
        delete streamIn;
        delete streamCompressed;
        delete streamOut;
    }
}
In particular, note that passing -1 for ALevel produces ZLib Error -2 (Z_STREAM_ERROR), which means an invalid parameter, in spite of the defaults I had found. Also, AWindowBits normally ranges from 8 to 15, but adding 16 gives you the gzip format, and negative values give you the raw format, as described in the zlib documentation referenced by Mark Adler, one of the authors of the zlib library. I changed AMemLevel from Indy's default per Mark Adler's comment.
Also, as noted the CompressStreamEx function will produce gzip compression using the parameters included in the comments above.
The above was tested in RAD Studio XE3. Thanks again for your help!

Using Corba string_dup versus using pointer to const

There is something I don't get, please enlighten me.
Is there a difference between the following (client side code)?
1) blah = (const char *)"dummy";
2) blah = CORBA::string_dup("dummy");
... just googling a bit, I see string_dup() returns a char*, so the two may be equivalent.
I was thinking 2) does two deep copies, not one.
I'm firing off the question anyway; please briefly confirm.
Thanks!
const char* blah = "dummy";
The C++ compiler generates a constant array of characters, null-terminated, somewhere in a data section of your executable. blah gets a pointer to it.
char* blah = CORBA::string_dup("dummy");
The function string_dup() is called with an argument that is a pointer to that constant array of characters. string_dup() then allocates memory from the free store and copies the string data into the free-store-allocated memory. The pointer to the free-store memory is returned to the caller. It is the caller's job to dispose of the memory when finished with CORBA::string_free(). Technically the ORB implementation is allowed to use some special free-store, but most likely it is just using the standard heap / free-store that the rest of your application is using.
It is often much better to do this:
CORBA::String_var s = CORBA::string_dup("dummy");
The String_var's destructor will automatically call string_free() when s goes out of scope.
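In code, the ownership difference between the three forms looks like this (a sketch; the include line is illustrative, since the actual header depends on your ORB, e.g. omniORB or TAO):
#include <omniORB4/CORBA.h> // illustrative; header varies by ORB

void ownership_examples()
{
    // 1) No copy at all: blah points at the literal in the executable.
    const char* blah = "dummy";

    // 2) One deep copy on the free store: the caller must release it.
    char* copy = CORBA::string_dup("dummy");
    // ... use copy ...
    CORBA::string_free(copy);

    // Preferred: String_var owns the copy and calls string_free()
    // automatically when it goes out of scope.
    CORBA::String_var s = CORBA::string_dup("dummy");
}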

BlackBerry J2ME Efficient Coding Guidelines? Could somebody elaborate on this?

I found the following code sample in BlackBerry Java Development, Best Practices. Could somebody explain what the code sample below means? What is the this in the code sample pointing to?
Avoiding StringBuffer.append (StringBuffer)
To append a String buffer to another, a BlackBerry® Java Application should use net.rim.device.api.util.StringUtilities.append( StringBuffer dst, StringBuffer src[, int offset, int length ] ).
Code sample
public synchronized StringBuffer append(Object obj) {
    if (obj instanceof StringBuffer) {
        StringBuffer sb = (StringBuffer)obj;
        // append sb's characters directly, without an intermediate String
        net.rim.device.api.util.StringUtilities.append(this, sb, 0, sb.length());
        return this;
    }
    return append(String.valueOf(obj));
}
StringBuffer does not offer an overload for the append() method that takes another StringBuffer. This means developers are likely to use StringBuffer.append(String str) and call .toString() on the second StringBuffer. This requires the second buffer to be turned into a string, which is immutable, and then the characters from the string are appended to the first StringBuffer. Thus every character in the second buffer is touched twice, and there is the unnecessary allocation of the String just to transfer the characters to the first StringBuffer.
The efficient way of doing this would copy each character from the second buffer onto the end of the first. However, StringBuffer does not provide any easy way of doing this. Thus the recommendation is to use StringUtilities.append(StringBuffer, StringBuffer) which is able to directly read the characters from the second buffer without copying them into an intermediate collection.
This saves the runtime of the extra copying, the runtime needed to allocate a temporary String, and the memory needed to allocate a temporary string.
It means that this StringBuffer class is not implemented efficiently. Java Strings are immutable; StringBuffer is what you use to build strings mutably. However, the StringBuffer implementation you're using is inefficient when appending one StringBuffer to another, so you need to use net.rim.device.api.util.StringUtilities instead. That's what the code is doing: encapsulating the use of that class in a new append() method.
