How to receive UDP data in Vala? - network-programming

Another Vala problem has occurred: I am trying to send and receive data via UDP. The sending works, and via Wireshark I can see that the server sends the expected result. The problem is: my program doesn't receive the data.
I checked and I can see that, once a socket has been created to send the UDP data, the specific port stays open. This is confirmed by Wireshark, because my PC doesn't send any ICMP port-unreachable messages back to the server.
What I've got so far:
try
{
    SocketClient mySocket = new SocketClient ();
    mySocket.protocol = SocketProtocol.UDP;
    mySocket.type = SocketType.DATAGRAM;
    var conn = mySocket.connect (new InetSocketAddress (addr, targetPort));
    conn.output_stream.write (themessage_in_a_uint8_array);
    DataInputStream response = new DataInputStream (conn.input_stream);
    string resp = "";
    char myChar;
    try
    {
        do
        {
            myChar = (char) response.read_byte ();
            print ("Response" + myChar.to_string ());
        } while (true);
    }
    catch (Error e)
    {
        print (e.message);
    }
}
catch (Error e)
{
    print (e.message);
}
What currently happens: the message is sent, the string 'Response' is printed once to the console, and after that it just loops.
If I check response.get_available() it returns 0.
I can check with lsof | grep used_portnumber and sure enough, the used socket stays open. What am I doing wrong?

I am not sure but this is what I suspect:
UDP is a datagram protocol (data is explicitly chopped into datagrams). The server has sent one datagram to the client. Now, in BSD sockets (and everything modelled after them), if the underlying socket has datagram type, then a read consumes a full packet. If the buffer is of insufficient length, the message is truncated.
The solution is to read the whole datagram in one go. For example:
uint8[] buffer = new uint8[1 << 16]; // maximum UDP datagram length - we don't lose anything
unowned string locale;
bool need_convert = GLib.get_charset (out locale);
do {
    ssize_t len = response.read (buffer);
    string text;
    if (need_convert) {
        // convert from UTF-8 to the current locale's charset for printing
        text = GLib.convert ((string) buffer, len, locale, "UTF-8");
    } else {
        text = (string) buffer;
    }
    stdout.printf ("Response %s", text);
} while (true);
Edit: I have changed the code to print UTF-8 text without assuming the current locale is UTF-8-based.
PS 1: This is my guess, as it is a gotcha of BSD sockets (also Winsock and everything built on them) that came to my mind. Please be gracious if it turns out not to answer the question once the question is made more specific.
PS 2: In general I would recommend against mixing bytes and chars. While sending the ASCII subset of chars is safe in ASCII-compatible encodings (ISO, UTF-8), it will bite you with CJK encodings, or when the sender sends 'ą' as UTF-8 and the receiver treats it as ISO-8859-2 (where this character has a different encoding). I assume this is for toy examples only. If not, you may want to read What Every Programmer Absolutely, Positively Needs To Know About Encodings And Character Sets To Work With Text.
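To see the truncation gotcha in isolation, here is a minimal Python sketch (my illustration, not part of the original answer; the address and payload are made up, and the discard behaviour shown is the POSIX one):

import socket

# a UDP receiver bound to an ephemeral local port
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

# send one 14-byte datagram to it
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello datagram", receiver.getsockname())

# A 1-byte read consumes the WHOLE datagram but returns only its first byte;
# the other 13 bytes are silently discarded, just like in the byte-wise Vala loop.
print(receiver.recv(1))  # b'h' - a second recv() would block forever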


How to remove non-ascii char from MQ messages with ESQL

CONCLUSION:
For some reason the flow wouldn't let me convert the incoming message to a BLOB by changing the Message Domain property of the Input Node, so I added a Reset Content Descriptor node before the Compute Node with the code from the accepted answer. On the line that parses the XML and creates the XMLNSC child for the message I was getting a 'CHARACTER:Invalid wire format received' error, so I took that line out and added another Reset Content Descriptor node after the Compute Node instead. Now it parses and replaces the Unicode characters with spaces, so it no longer crashes.
Here is the code for the added Compute Node:
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    DECLARE NonPrintable BLOB X'0001020304050607080B0C0E0F101112131415161718191A1B1C1D1E1F7F808182838485868788898A8B8C8D8E8F909192939495969798999A9B9C9D9E9FA0A1A2A3A4A5A6A7A8A9AAABACADAEAFB0B1B2B3B4B5B6B7B8B9BABBBCBDBEBFC0C1C2C3C4C5C6C7C8C9CACBCCCDCECFD0D1D2D3D4D5D6D7D8D9DADBDCDDDEDFE0E1E2E3E4E5E6E7E8E9EAEBECEDEEEFF1F2F3F4F5F6F7F8F9FAFBFCFDFEFF';
    DECLARE Printable BLOB X'20202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020';
    DECLARE Fixed BLOB TRANSLATE(InputRoot.BLOB.BLOB, NonPrintable, Printable);
    SET OutputRoot = InputRoot;
    SET OutputRoot.BLOB.BLOB = Fixed;
    RETURN TRUE;
END;
UPDATE:
The message is being parsed as XML using XMLNSC. I thought that would cause a problem, but it does not appear to.
Now I'm using PHP. I've created a node to plug into the legacy flow. Here's the relevant code:
class fixIncompetence {
    function evaluate ($output_assembly, $input_assembly) {
        $output_assembly->MRM  = $input_assembly->MRM;
        $output_assembly->MQMD = $input_assembly->MQMD;
        $tmp = htmlentities($input_assembly->MRM->VALUE_TO_FIX, ENT_HTML5|ENT_SUBSTITUTE, 'UTF-8');
        if (!empty($tmp)) {
            $output_assembly->MRM->VALUE_TO_FIX = $tmp;
        }
        // Ensure there are no null MRM fields. MessageBroker is strict.
        foreach ($output_assembly->MRM as $key => $val) {
            if (empty($val)) {
                $output_assembly->MRM->$key = '';
            }
        }
    }
}
Right now I'm getting a vague error about read-only messages, but before that it wasn't working either.
Original Question:
For some reason I am unable to impress upon the senders of our MQ messages that smart quotes, en dashes, em dashes, and such crash our XML parser.
I managed to make a working solution with SQL queries, but it wasted too many resources. Here's the last thing I tried, but it didn't work either:
CREATE FUNCTION CLEAN(IN STR CHAR) RETURNS CHAR
BEGIN
    SET STR = REPLACE('–',STR,'&ndash;');
    SET STR = REPLACE('—',STR,'&mdash;');
    SET STR = REPLACE('·',STR,'&middot;');
    SET STR = REPLACE('“',STR,'&ldquo;');
    SET STR = REPLACE('”',STR,'&rdquo;');
    SET STR = REPLACE('‘',STR,'&lsqo;');
    SET STR = REPLACE('’',STR,'&rsquo;');
    SET STR = REPLACE('•',STR,'&bull;');
    SET STR = REPLACE('°',STR,'&deg;');
    RETURN STR;
END;
As you can see, I'm not very good at this. I have tried reading about various ESQL string functions without much success.
So in ESQL you can use the TRANSLATE function.
The following is a snippet I use to clean up a BLOB containing non-ASCII low hex values so that it can then be cast into a usable character string.
You should be able to modify it to change your undesired characters into something more benign. Basically, each hex value in NonPrintable gets translated into its positional equivalent in Printable, in this case always a full stop, i.e. X'2E' in ASCII. You'll need to make your BLOBs long enough to cover the desired range of hex values.
DECLARE NonPrintable BLOB X'000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F202122232425262728292A2B2C2D2E2F303132333435363738393A3B3C3D3E3F';
DECLARE Printable BLOB X'2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E';
SET WorkBlob = TRANSLATE(WorkBlob, NonPrintable, Printable);
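If the positional mapping is hard to picture, the same operation can be sketched in Python (my illustration, not ESQL; the byte ranges mirror the two DECLARE statements above):

# bytes.translate performs the same positional substitution as ESQL's TRANSLATE:
# each byte found in non_printable is replaced by the byte at the same index
# in printable (here always an ASCII full stop, 0x2E).
non_printable = bytes(range(0x00, 0x40))  # X'00' through X'3F', as declared above
printable = b'.' * len(non_printable)
table = bytes.maketrans(non_printable, printable)

work_blob = b'\x01\x02ABC\x1f!'
print(work_blob.translate(table))  # b'..ABC..'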
BTW, if messages with invalid characters only come in every now and then, I'd probably specify BLOB on the input node and then use something similar to the following to invoke the XMLNSC parser.
CREATE LASTCHILD OF OutputRoot DOMAIN 'XMLNSC'
    PARSE(InputRoot.BLOB.BLOB CCSID InputRoot.Properties.CodedCharSetId ENCODING InputRoot.Properties.Encoding);
With the exception terminal wired up, you can then correct the BLOBs of any messages containing parser-breaking invalid characters before attempting to reparse.
Finally, my best wishes, as I've had a number of battles over the years with being forced to correct invalid message content in the "Integration Layer"; after all, that's what it's meant to do.

Serving files with IdHTTPServer when the files are being written

I'm working with a TIdHTTPServer to serve files to clients, using the ResponseInfo->ServeFile function. This works fine for files that are "static", i.e. not being written by some other process. As far as I can see from the code, the ServeFile function internally uses a TIdReadFileExclusiveStream, which prevents me from reading a file that is being written, but I need to be able to send files that are being written by some other process.
So, I moved to create a FileStream myself and use the ContentStream property to return it to the client, but I get a 0 bytes file in the client (for any file, being written or not), and I can't see what I'm missing or doing wrong. Here is the code I'm using on the OnCommandGet event handler:
AResponseInfo->ContentStream = new TFileStream(path, fmOpenRead | fmShareDenyNone);
AResponseInfo->ContentStream->Position = 0;
AResponseInfo->ContentLength = AResponseInfo->ContentStream->Size;
AResponseInfo->ResponseNo = 200;
AResponseInfo->WriteHeader();
AResponseInfo->WriteContent();
The ContentLength property at this point has a valid value (i.e., the file size when calling ContentStream->Size), and that's what I would like to send to the client, even if the file changes in between.
I have tried removing the WriteContent() call, and then the WriteHeader() call, but the results are the same. I searched for some examples, but the few I found are more or less the same as this code, so I don't know what's wrong. Most examples don't include the WriteContent() call, which is why I tried removing it, but there doesn't seem to be any difference.
As a side note: the files being written take 24 hours to finish writing, but that's to be expected from the client side: I just need the bytes already written at the time of the request (even somewhat less is valid). The files will never get deleted: they will just keep getting bigger.
Any ideas?
Update
Using Fiddler, I get some warnings about protocol violations that could be related to this. For instance:
Content-Length mismatch: Response Header indicated 111,628,288 bytes, but server sent 41 bytes
The content length is correct (it's the file size), but I don't know what I'm doing wrong that makes the app send just 41 bytes.
WriteHeader() and WriteContent() expect the ContentStream to be complete and unchanging at the time they are called. WriteHeader() creates a Content-Length header using the current ContentStream->Size value if the AResponseInfo->ContentLength property is -1 (you are actually setting the value yourself), and WriteContent() sends only as many bytes as the current ContentStream->Size value says. So your client is receiving 0 bytes because the file Size is still 0 at the time you are calling WriteHeader() and WriteContent().
Neither ServeFile() nor ContentStream are suitable for your needs. Since the file is being written live, you do not know the final file size when the HTTP headers are created and sent to the client. So you must use HTTP 1.1's chunked transfer coding to send the file data. That will allow you to send the file data in chunks as the file is being written, and then signal the client when the file is finished.
However, TIdHTTPServer does not natively support sending chunked responses, so you will have to implement it manually, eg:
TFileStream *fs = new TFileStream(path, fmOpenRead | fmShareDenyNone);
try
{
    AResponseInfo->ResponseNo = 200;
    AResponseInfo->TransferEncoding = "chunked";
    AResponseInfo->WriteHeader();
    TIdBytes buffer;
    buffer.Length = 1024;
    do
    {
        int NumRead = fs->Read(&buffer[0], 1024);
        if (NumRead == -1) RaiseLastOSError();
        if (NumRead == 0)
        {
            // check for EOF, unless you have another way to detect it...
            Sleep(1000);
            NumRead = fs->Read(&buffer[0], 1024);
            if (NumRead <= 0) break;
        }
        // send the current chunk
        AContext->Connection->IOHandler->WriteLn(IntToHex(NumRead));
        AContext->Connection->IOHandler->Write(buffer, NumRead);
        AContext->Connection->IOHandler->WriteLn();
    }
    while (true);
    // send the last chunk to signal EOF
    AContext->Connection->IOHandler->WriteLn("0");
    // send any trailer headers you need, if any...
    // finish the transfer encoding
    AContext->Connection->IOHandler->WriteLn();
}
__finally
{
    delete fs;
}
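For reference, the wire format that loop produces can be sketched in Python (my illustration; "Hello" and "World" stand in for two reads of the file):

# Each chunk is "<size in hex>\r\n<payload>\r\n"; a zero-size chunk followed by
# an empty trailer line terminates the chunked body.
def encode_chunk(data: bytes) -> bytes:
    return f"{len(data):X}\r\n".encode("ascii") + data + b"\r\n"

body = encode_chunk(b"Hello") + encode_chunk(b"World") + b"0\r\n\r\n"
print(body)  # b'5\r\nHello\r\n5\r\nWorld\r\n0\r\n\r\n'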
The final working code is:
std::unique_ptr<TFileStream> fs(new TFileStream(path, fmOpenRead | fmShareDenyNone));
fs->Position = 0;
__int64 size = fs->Size;
AResponseInfo->ContentLength = size;
AResponseInfo->ResponseNo = 200;
AResponseInfo->WriteHeader();
AContext->Connection->IOHandler->Write(fs.get(), size);
This allows the client to receive up to size bytes of the original file, even if the file is being written to at the same time.
For some reason passing the ContentStream did not return any content to the client, but doing the IOHandler->Write directly (which is what ServeFile ends up doing internally) works fine.

send xml data via socket with message (VLI) Ruby

Ruby novice here. First time posting, so excuse any communication protocol inadequacies :)
This site has been a great help, and a HUGE shoutout of thanks to all.
I need to connect my Rails app to an electricity provider's API so I can vend electricity to my web customers. I need some help simply getting an initial request sent to the API.
IP: 41.204.194.188
Port: 8945
First block: What is a message variable length indicator (VLI)?
"2 bytes precede every message sent to/from BizSwitch. The 2 bytes are referred to as a variable length indicator. Bytes 1-2 indicate the number of bytes in the message (excluding the first 2 bytes). The 2 bytes represent a 16bit unsigned integer in network byte order. Note that if a compressed message is being sent, the message will have to first be compressed, in order to determine its length, before being sent."
Ignore compression.
link to api doc: https://dl.dropboxusercontent.com/u/3815995/Ipay-prepaidElecTransactionSpec.pdf
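So the framing is just a 16-bit unsigned length prefix in network byte order. A minimal Python sketch of building it (my illustration; the payload is a placeholder, not a valid ipayMsg):

import struct

def frame_message(payload: bytes) -> bytes:
    # VLI: 2-byte unsigned length in network byte order,
    # excluding the 2 prefix bytes themselves
    return struct.pack(">H", len(payload)) + payload

framed = frame_message(b"<ipayMsg>...</ipayMsg>")
print(framed[:2])  # b'\x00\x16' - the 22-byte payload length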
Simple Vend Request example:
<ipayMsg client="ipay" term="1" seqNum="0" time="2002-05-16 10:55:30 +0200">
  <elecMsg ver="2.37">
    <vendReq>
      <ref>136105500001</ref>
      <amt cur="ZAR">11400</amt>
      <numTokens>1</numTokens>
      <meter>A12C3456789</meter>
      <payType>cash</payType>
    </vendReq>
  </elecMsg>
</ipayMsg>
Simple Vend Response example
<ipayMsg client="ipay" term="1" seqNum="0" time="2002-05-16 10:55:35 +0200">
  <elecMsg ver="2.37">
    <vendRes>
      <ref>136105500001</ref>
      <res code="elec000">OK</res>
      <util addr="Megawatt Park, Contact Centre tel 086-003-7566" taxRef="4740101508" distId="6004708001509">Eskom Online</util>
      <stdToken units="346.34" rctNum="12345678" amt="10000" tax="1400">12345678901234567890</stdToken>
      <rtlrMsg>060000 Warning: This meter is not configured for FBE.</rtlrMsg>
      <customerMsg>Meter not registered for Free Basic Electricity. Please apply at your local office.</customerMsg>
    </vendRes>
  </elecMsg>
</ipayMsg>
I've got this far and I seem to be connected, but how do I actually send and receive responses? I've tried googling for help but have yet to find how to send the XML packet and then receive the response.
#!/usr/bin/env ruby
require 'socket'

begin
  socket = TCPSocket.new('41.204.194.188', 8945)
rescue => e
  puts "error: #{e}"
else
  puts "connected"
end
socket.close
Would appreciate any assistance or a nudge in the right direction.
Kind regards,
Jamie
Great, I figured it out. The main issue regarding communication with the socket was sending the message variable length indicator. This Stack Overflow question put me on the right path: "Ruby - How to represent message length as 2 binary bytes".
Step 1: Determine the length of my XML message: length = message.size
(The first field in the header must be the message length, which is defined as a 2-byte binary message length in network byte order.)
Step 2: Build the VLI: message_variable_length_indicator = [length].pack("n")
Step 3: Connect to the socket: streamSock = TCPSocket::new('41.204.194.188', 8945)
Step 4: Write the VLI: streamSock.write(message_variable_length_indicator)
Step 5: Write the message: streamSock.write(message)
Step 6: Get a response: str = streamSock.recvfrom(1000)
Now to deal with timeouts, but at least I'm connecting :)
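One caveat with Step 6: recvfrom(1000) can return less than a full message, because TCP is a byte stream. The 2-byte VLI on the response tells you exactly how much to read. Here is the idea as a Python sketch (my illustration; a Ruby version would follow the same logic):

import socket
import struct

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    # loop until exactly n bytes have arrived; TCP may deliver fewer per recv()
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed before full message arrived")
        data += chunk
    return data

def read_vli_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">H", recv_exactly(sock, 2))  # the 2-byte VLI
    return recv_exactly(sock, length)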
I have done this in PHP; maybe you will get the idea from the code below, which is working fine for my request. (Note that this example writes a 4-byte length prefix, whereas the spec quoted above calls for a 2-byte VLI, so adjust accordingly.)
function sendSocketRequest($XmlString, $Socket_Request) {
    if (!($sock = socket_create(AF_INET, SOCK_STREAM, 0))) {
        $errorcode = socket_last_error();
        $errormsg = socket_strerror($errorcode);
        die("Couldn't create socket: [$errorcode] $errormsg \n");
    }
    if (!socket_connect($sock, $Socket_Request['HostName'], $Socket_Request['Port'])) {
        $errorcode = socket_last_error();
        $errormsg = socket_strerror($errorcode);
        die("Could not connect: [$errorcode] $errormsg \n");
    }
    $status = socket_write($sock, pack_int32be(strlen($XmlString)), 4);
    $status = socket_write($sock, $XmlString, strlen($XmlString));
    $response = socket_read($sock, $this->_socketReadLength);
    socket_close($sock);
    return substr($response, 2);
}

function pack_int32be($i) {
    if ($i < -2147483648 || $i > 2147483647) {
        die("Out of bounds");
    }
    return pack('C4', ($i >> 24) & 0xFF, ($i >> 16) & 0xFF, ($i >> 8) & 0xFF, ($i >> 0) & 0xFF);
}

$socketResponse = sendSocketRequest($yourXMLString, array('HostName' => '<HostName>', 'Port' => '<Port>'));

readByteSync - is this behavior correct?

stdin.readByteSync has recently been added to Dart.
Using stdin.readByteSync for data entry, I am attempting to allow a default value; if an entry is made by the operator, the default value is cleared. If no entry is made and just Enter is pressed, then the default is used.
What appears to be happening, however, is that no output is sent to the terminal until a newline character is entered. Therefore, when I do a print() or a stdout.write(), it is delayed until a newline is entered.
As a result, when the operator enters the first character to override the default, the default is not cleared. I.e. the default is "abc" and the data entered is "xx", yet "xxc" is showing on screen after entry of "xx". The "problem" appears to be that no writes to the terminal are sent until a newline is entered.
While I can find an alternative way of doing this, I would like to know if this is the way readByteSync should or must work. If so, I’ll find an alternative way of doing what I want.
// Example program //
import 'dart:io';

void main() {
  int iInput;
  List<int> lCharCodes = [];
  print(""); print("");
  String sDefault = "abc";
  stdout.write("Enter data : $sDefault\b\b\b");
  while (iInput != 10) { // wait for newline
    iInput = stdin.readByteSync();
    if (iInput == 8 && lCharCodes.length > 0) { // backspace
      lCharCodes.removeLast();
    } else if (iInput > 31) { // ASCII printable char
      lCharCodes.add(iInput);
      if (lCharCodes.length == 1)
        stdout.write(" \b\b\b\b chars cleared"); // clear line
      print("\nlCharCodes length = ${lCharCodes.length}");
    }
  }
  print("\nData entered = ${new String.fromCharCodes(lCharCodes).trim()}");
}
Results on the command screen are:
c:\Users\Brian\dart-dev1\test\bin>dart testsync001.dart
Enter data : xxc
chars cleared
lCharCodes length = 1
lCharCodes length = 2
Data entered = xx
c:\Users\Brian\dart-dev1\test\bin>
I recently added stdin.readByteSync and readLineSync to make it easier to create small scripts that read stdin. However, two things are still missing for this to be feature-complete.
1) Line mode vs raw mode. This is basically what you are asking for: a way to get a char as soon as it's typed.
2) Echo on/off. This mode is useful for e.g. typing in passwords, so you can disable the default echo of the characters.
I hope to be able to implement and land these features rather soon.
You can star this bug to track the development of it!
This is common behavior for consoles. Try to flush the output with stdout.flush().
Edit: my mistake. I looked at a very old revision (dartlang-test). The current API does not provide any means to flush stdout. Feel free to file a bug.
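For readers unfamiliar with the line-mode/raw-mode distinction discussed above, here is what switching it off looks like at the POSIX level, sketched in Python (my illustration; Dart later exposed the same switches as stdin.lineMode and stdin.echoMode):

import sys
import termios
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)  # remember the terminal state
try:
    tty.setcbreak(fd)          # turn off canonical (line) mode and echo
    ch = sys.stdin.read(1)     # returns as soon as a single key is pressed
    print(f"\ngot: {ch!r}")
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # restore the terminal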

twisted buffer full in tcp connection

I'm having problems receiving long data (>1024 bytes) in a simple Twisted server implementation.
From the beginning: I'm developing an iOS app that has to synchronize with a Twisted server. I prepare the information to send in JSON format, then I start to send that data in chunks (right now in chunks of 256 bytes + 4 bytes for the command - yes, I'm implementing my own protocol). The connection is ok, and I receive those packets in my server (in the dataReceived function of my own Protocol subclass).
The iOS method NSInteger writtenBytes = [self.outputStream write:[data bytes] maxLength:[data length]] returns the number of bytes written into the stream. For the first 4 packets the returned value is as expected (260 bytes). If I have more bytes available to send, the next time I call that method it returns 0 (about which Apple's documentation says: "If the receiver is a fixed-length stream and has reached its capacity, 0 is returned.").
So I deduce that the stream's output buffer is full. I don't know how to free that buffer (I don't know how to reach it), and I don't know where its limit is (it seems almost ridiculously small to me).
This is a basic test of the server (just the important parts for this question, with a basic string-based protocol):
from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor

class IphoneSync(Protocol):
    def __init__(self):
        self.__buffer = ""

    def connectionMade(self):
        self.transport.write("0:")
        self.factory.clients.append(self)
        print "clients are ", self.factory.clients

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def dataReceived(self, data):
        #print "data is ", data
        a = data.split(':')
        if len(a) > 1:
            command = a[0]
            content = a[1]
            msg = ""
            if command == "iam":
                # user & password checking
                msg = "1"
            elif command == "msg":
                self.__buffer += data
                msg = "1: continue"
            elif command == "fin":
                # process everything:
                # convert the data to JSON,
                # insert/update the data in sqlite,
                # and return the response
                print "buffer is", self.__buffer
                msg = "2: processing"
            print msg
            self.transport.write(msg)
            #for c in self.factory.clients:
            #    c.message(msg)

    def message(self, message):
        self.transport.write(message)
        #self.transport.write(message + '\n')

factory = Factory()
factory.protocol = IphoneSync
factory.clients = []
dir(factory)

reactor.listenTCP(8000, factory)
print "Iphone Chat server started"
reactor.run()
I saw the LineReceiver class, but I'm not sending lines. The transferred data could be very big (10 MB - 50 MB). I'm thinking about the consumer/producer model, or RPC protocols like AMP or PB, as a solution, but I wanted to work with my own protocol.
If someone knows how to help me, I'd appreciate it very much. Thanks anyway.
The connection is ok, and I receive those packets in my server (in the dataReceived function of my own Protocol subclass).
Probably not. TCP is a "stream oriented" protocol. Your application's use of it is not in terms of packets but in terms of a sequence of bytes. There is no guarantee whatsoever that dataReceived will be called with the same string that you passed to outputStream write. If you write "hello, world", dataReceived may be called with "hello, world" - or it may be called twice, first with "hello," and then with " world". Or it may be called 12 times: first "h", then "e", then "l", etc.
And if you call outputStream write twice, once with "hello," and once with " world", then it's entirely possible dataReceived will be called just once with "hello, world". Or perhaps twice, but with "h" and then "ello, world".
So this brand new protocol you're inventing (which I see you mentioned you recognized you were doing, but you didn't explain why this is a good idea or an important part of your application, instead of just a large source of potential bugs and a poor use of time :) has to do something called "framing" in order to let you actually interpret the byte sequence being passed around. This is why there are protocols like AMP.
To actually answer your question, outputStream write returns the number of bytes it was actually able to buffer for sending. You must always check its return value and re-try writing any bytes it wasn't able to send, preferably after waiting for notification that there is more buffer space. Buffer space becomes available after bytes using that space are sent across the network and acknowledged by the receiver. This takes time, as networks are not instantaneous. Notification about buffer space being available comes in many forms, the oldest and most widespread of which (but not necessarily the best in your environment), the select(2) system call.
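That write-retry pattern looks like this in Python with select (my illustration; on the iOS side the equivalent is waiting for NSStreamEventHasSpaceAvailable before writing the remainder):

import select
import socket

def send_all(sock: socket.socket, data: bytes) -> None:
    # keep writing until the kernel has accepted every byte,
    # waiting for buffer space instead of busy-looping
    sock.setblocking(False)
    while data:
        try:
            sent = sock.send(data)
            data = data[sent:]
        except BlockingIOError:
            # the send buffer is full: block until select() reports writability
            select.select([], [sock], [])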
In addition to Jean-Paul Calderone's answer (ensuring that data is sent completely from the Obj-C side by using select or a thread), for the protocol part I would suggest using length-prefixed strings (AKA netstrings) for the simple use case.
Here's an implementation. Whenever something is received, you need to call NSBuffer.write then NSBuffer.extract to get available strings.
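On the Twisted side the framing can come for free: twisted.protocols.basic ships a NetstringReceiver that strips the length prefix and reassembles complete messages before calling stringReceived. A minimal sketch (the JSON handling and the port are illustrative, not from the original post):

import json

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import NetstringReceiver

class IphoneSync(NetstringReceiver):
    # the default MAX_LENGTH is ~100 KB; raise it for the 10-50 MB payloads above
    MAX_LENGTH = 100 * 1024 * 1024

    def stringReceived(self, string):
        # called only once a complete netstring-framed message has arrived
        payload = json.loads(string)
        # ... insert/update the sqlite data here ...
        self.sendString(json.dumps({"status": "ok"}).encode("utf-8"))

factory = Factory()
factory.protocol = IphoneSync
reactor.listenTCP(8000, factory)
reactor.run()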
