I'm trying to put together some Lua code that will send a command and then capture all the broken-up responses that come back from a UDP device.
Here is my current code:
local udp = socket.udp()
udp:settimeout(0)
udp:setpeername("172.16.0.23", 65432)
local cmd = "$RS232 test message\r"
udp:send(cmd)
repeat
local data, msg = udp:receive()
if data then
print("received:", data)
end
elseif msg ~= 'timeout' then
error("Network error: "..tostring(msg))
end
until not data
But it keeps giving me the following error:
Code error: Line 12: 'until' expected (to close 'repeat' at line 7) near 'elseif'
Any ideas what I'm missing?
A quick bit of background on the UDP (serial RS232) device I'm connecting to: the data is sent as a lot of two- or three-byte packets. This is because of the RS232 data rate, and because the device uses interrupts to process the data. Basically the UDP device receives a few bytes, an interrupt fires and it processes those, then it receives a few more, the interrupt fires again, and so on.
So the above repeat loop is to ensure I have captured everything it’s got for me?
Your if statement is ended prematurely.
if data then
print("received:", data)
end -- remove this end!
elseif msg ~= 'timeout' then
error("Network error: "..tostring(msg))
end
That repeat statement doesn't make too much sense imho. Why don't you use a timeout instead?
If you remove that end, you will receive once with a timeout of 0. As your peer never had a chance to respond, data will be nil and your repeat loop will terminate.
Maybe give the documentation another read.
Also, I don't understand how you relate UDP to RS232. Those are two completely different things.
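To illustrate the timeout suggestion above, a receive loop driven by a short timeout could look like this. This is only a sketch: the 0.5-second timeout is an arbitrary choice, and the address, port and command are simply the ones from the question.
local socket = require("socket")

local udp = socket.udp()
udp:setpeername("172.16.0.23", 65432)
udp:settimeout(0.5)               -- block for up to 500 ms on each receive

udp:send("$RS232 test message\r")

-- collect datagrams until nothing more arrives within the timeout
local parts = {}
while true do
  local data, err = udp:receive() -- one datagram per call
  if data then
    parts[#parts + 1] = data
  elseif err == "timeout" then
    break                         -- assume the device has sent everything
  else
    error("Network error: " .. tostring(err))
  end
end
print("received:", table.concat(parts))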
I am using a CANcase VN1640A between two ECUs in order to falsify a CAN message, in a bridge simulation setup.
In my CAPL code, the messages received on channel 1 are redirected to channel 3 and vice versa. (So far I am not falsifying any message.)
variables {
  message can1.* msgCAN1;
  message can3.* msgCAN3;
}

on message can1.* {
  msgCAN3 = this;
  if (this.dir == rx)
    output(msgCAN3);
}

on message can3.* {
  msgCAN1 = this;
  if (this.dir == rx)
    output(msgCAN1);
}
But when I start CANoe I get an error message telling me that the transmit queue has overflowed, i.e. that CANoe is trying to send more than it can. I have changed the Transmit Queue size in the hardware configuration to the maximum of 32768 messages and set the Receive Latency to very fast, but unfortunately the error still occurs.
Does anyone have any hints that could help solve this problem? Thanks in advance.
The error message can mean that CANoe tries to send more than it can. The transmit buffer overflows. This can have several causes:
- The bus is full of higher-priority messages, and therefore the CAN hardware cannot send.
- You have a program which writes messages to the buffer very quickly, so the card can't send (e.g. while/for loops).
- Error frames occur when sending, and thus the card cannot send.
The Vector tool provides a loop test:
Send messages from CH1 to CH3. If this is working fine, it looks like the problem is caused by your CANoe configuration.
The necessary test programs are part of the Vector Driver Setup Files and located in the folder Common. You can download the Driver Setup File from www.vector.com/driver-setup.
CAN Highspeed Looptest: http://kb.vector.com/entry/589/
CAN Low-speed Looptest: http://kb.vector.com/entry/590/
If the loop test works fine, you can see the time, the busload etc. If not, you will get a failed message.
Note:
Reduce the number of channels used in CANoe/CANalyzer under:
Configuration | Options | Measurement | General | Channel usage.
Are there more selected channels in the CANoe configuration than assigned CANcabs in the Vector Hardware Config?
(Start | Control Panel | Hardware and Sound | Vector Hardware)
Please check the channel and application assignment in the Vector Hardware Config.
Kindly check the hardware mapping in CANoe. This error mostly arises when the mapping is not correct or disturbed.
Go to Hardware -> Network Hardware Configuration -> Driver -> select the proper channel for the Vector hardware.
I hope this helps!
So this error does NOT mean that CANoe tries to send more than it can.
It means instead:
There are (many) error frames on the CAN bus. CANoe tries to send messages, which does not work (for whatever reason), so error frames are the result. The CAN controller will retry sending the frame, which may again lead to an error frame. Over time the send requests accumulate and lead to further error frames. At some point the buffer for the error frames overflows, which leads to the message you see in the Write window.
Solution:
We have to check the Trace window to see what kind of error frames we get there (and then take suitable measures to prevent them).
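As a small aid, an errorFrame handler can also report in the Write window how many error frames occur and on which channel. This is just a sketch to run alongside the gateway code above; it assumes the this.can selector is available in the errorFrame handler to identify the channel.
variables {
  long errCount = 0;
}

on errorFrame {
  errCount++;
  write("Error frame %d on CH%d", errCount, this.can);
}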
If I send a lot of messages to a remote node and then immediately call erlang:disconnect_node/1 to drop the connection, is there a chance some messages don't make it over the wire? In other words, does that function perform a brutal disconnect, regardless of any pending messages?
There is no guarantee, even with two local nodes!
Setup: I have a node a@super, on which a dummy receive-print loop runs, registered as a. On another node, I run:
(b@super)1> [{a, a@super} ! X || X <- lists:seq(0, 10000)], erlang:disconnect_node(a@super).
That is, many messages, and then a brutal disconnection.
Result: the receiver printed all 10001 messages in only one out of 10 runs.
So you definitely do not have any guarantee that the receiver got all the messages. You should use another technique (I'm a novice at Erlang, sorry), or use an ack message before the disconnect, as sketched below.
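Here is a sketch of the ack idea (the module and function names are made up, not part of the test above): the receiver answers a final marker message, and the sender only disconnects after that answer arrives. Messages between the same pair of processes are delivered in order, so receiving the ack implies the earlier messages arrived as well.
-module(ack_example).
-export([start_receiver/0, send_all/0]).

%% Run on a@super: a dummy receive-print loop, registered as `a`,
%% that answers a final {done, From} marker with an ack.
start_receiver() ->
    register(a, spawn(fun loop/0)).

loop() ->
    receive
        {done, From} -> From ! ack, loop();
        X            -> io:format("~p~n", [X]), loop()
    end.

%% Run on b@super: send everything, then wait for the ack before
%% dropping the connection.
send_all() ->
    [{a, a@super} ! X || X <- lists:seq(0, 10000)],
    {a, a@super} ! {done, self()},
    receive ack -> ok end,
    erlang:disconnect_node(a@super).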
I'm facing a problem with the Indy TCP Connection.ReadLn method. I have no control over how the other side sends the data. When using the ReadLn method, the server-side application hangs (because the received data doesn't contain a carriage return). I'm trying the ReadString method, but without success.
Is there any suggestion for how to deal with this problem? Maybe I should be looking for another component rather than Indy.
I need to get data from the other client (TCP connection) without any information about the size of the received data and without a carriage return at the end of each frame.
You have to know how the data is being sent in order to read it properly. TCP is a byte stream; the sender needs to somehow indicate where one message ends and the next begins, either by:
- prefixing each message with its length
- putting unique delimiters in between each message
- pausing in time between each message
Indy can handle all of these possibilities, but you need to identify which one is actually being used first.
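For example, if it turns out that each frame is sent with a length prefix, the read could look something like this. This is only a sketch: the Connection variable and the 4-byte length prefix are assumptions, not something stated in the question.
var
  Len: Integer;
  Frame: string;
begin
  // Assumed protocol: a 4-byte length (network byte order) followed by Len bytes of payload
  Len := Connection.ReadInteger;        // reads and byte-swaps the length prefix
  Frame := Connection.ReadString(Len);  // reads exactly Len bytes
  // ... process Frame ...
end;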
Worst-case scenario, use the CurrentReadBuffer() method, which returns a String of whatever raw bytes are available at that moment.
I'm trying to implement support for Apple's enhanced Push Notification message format in my Rails app, and am having some frustrating problems. I clearly don't understand sockets as much as I thought I did.
My main problem is that if I send all messages correctly, my code hangs, because socket.read will block until I receive a message. Apple doesn't return anything if your messages looked OK, so my program locks up.
Here is some pseudocode for how I have this working:
cert = File.read(options[:cert])
ctx = OpenSSL::SSL::SSLContext.new
ctx.key = OpenSSL::PKey::RSA.new(cert, options[:passphrase])
ctx.cert = OpenSSL::X509::Certificate.new(cert)
sock = TCPSocket.new(options[:host], options[:port])
ssl = OpenSSL::SSL::SSLSocket.new(sock, ctx)
ssl.sync = true
ssl.connect
messages.each do |message|
ssl.write(message.to_apn)
end
if read_buffer = ssl.read(6)
process_error_response(read_buffer)
end
Obviously, there are a number of problems with this:
If I'm sending messages to a large number of devices, and the failure message is sent half way through processing, then I'm not going to actually see the error until I've already tried to send to all devices.
As mentioned earlier, if all messages were acceptable to Apple, my app will hang on the socket read call.
One way I've tried to solve this is by reading from the socket in a separate thread:
Thread.new() {
while data = ssl.read(6)
process_error_response(data)
end
}
messages.each do |message|
ssl.write(message.to_apn)
end
ssl.close
sock.close
This doesn't seem to work. Data never seems to be read from the socket. This is probably a misunderstanding I have about how sockets are supposed to work.
The other solution I have thought of is a non-blocking read call... but it doesn't seem like Ruby has a non-blocking read call on SSLSocket until 1.9... which I unfortunately cannot use right now.
Could someone with a better understanding of socket programming please point me in the right direction?
cam is correct: the traditional way to handle this situation is with IO.select
if IO.select([ssl], nil, nil, 5)
read_buffer = ssl.read(6)
process_error_response(read_buffer)
end
This will check ssl for "readability" for up to 5 seconds, returning a non-nil result if ssl becomes readable and nil otherwise.
Can you use IO.select? It lets you specify a timeout, so you could at least limit the amount of time you block. See the spec for details: http://github.com/rubyspec/rubyspec/blob/master/core/io/select_spec.rb
I'm interested in this too. Here is another approach, unfortunately with its own flaws.
messages.each do |message|
  begin
    # Write message to APNS
    ssl.write(message.to_apn)
  rescue
    # Write failed (disconnected), read the error response
    response = ssl.read(6)
    # Unpack the binary response and print it out
    command, errorCode, identifier = response.unpack('CCN')
    puts "Command: #{command} Code: #{errorCode} Identifier: #{identifier}"
    # Before reconnecting, the problem (assuming an incorrect token) must be solved
    break
  end
end
This seems to work, and since I'm keeping a persistent connection, I can reconnect in the rescue block without problems and start over again.
There are some issues though. The main problem I'm looking to solve is disconnects caused by sending in incorrect device tokens (for example from development builds). If I have 100 device tokens that I send a message to, and somewhere in the middle there is an incorrect token, my code lets me know which one it was (assuming I supplied good identifiers). I can then remove the faulty token, and send the message to all devices that appeared after the faulty one (since the message didn't get sent to them). But if the incorrect token is somewhere in the end of the 100, the rescue doesn't happen until the next time I send messages.
The problem seems to be that the code isn't really real-time. If I were to send, say, 10 messages to 10 incorrect tokens with this code, everything would appear to be fine: the loop would go through and no problems would be reported. It seems that write() doesn't wait for everything to clear up, and the loop runs through before the connection is terminated. The next time the loop is run, the write() call fails (since we've actually been disconnected since the last time) and we get the error.
If there is an alternative way to respond to the failed connection, this could solve the problem.
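One way to narrow that window is to poll for the 6-byte error packet after each write, combining the loop above with the IO.select approach from the earlier answer. This is only a sketch, and Apple may still report the error a few writes after the offending one, so it reduces the delay rather than eliminating it.
messages.each do |message|
  ssl.write(message.to_apn)

  # Zero-timeout select: check for an error packet without blocking
  if IO.select([ssl], nil, nil, 0)
    command, errorCode, identifier = ssl.read(6).unpack('CCN')
    puts "Command: #{command} Code: #{errorCode} Identifier: #{identifier}"
    break
  end
end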
There is a simple way. After you write your messages, try reading in nonblocking mode:
ssl.connect
ssl.sync = true # then ssl.write() flushes immediately
ssl.write(your_packed_frame)
sleep(0.5) # so APN have time to answer
begin
error_packet = ssl.read_nonblock(6) # Read one packet: 6 bytes
# If we are here, there IS an error_packet which we need to process
rescue IO::WaitReadable
# There is no (yet) 6 bytes from APN, probably everything is fine
end
I use it with MRI 2.1 but it should work with earlier versions too.
I have been doing overlapped serial port communication in Delphi lately and there is one problem I'm not sure how to solve.
I communicate with a modem. I write a request frame (an AT command) to the modem's COM port and then wait for the modem to respond. The event mask of the port is set to EV_RXCHAR, so when I write a request, I call WaitCommEvent() and start waiting for data to appear in the input queue. When the overlapped wait for the event finishes, I immediately start reading data from the queue and read all that the device sends at once:
1) write a request
2) call WaitCommEvent() and wait until waiting finishes
3) read all the data that the device sends (not only the data that is in the input queue at that moment)
4) do something and then goto 1
Waiting for the event finishes after the first byte appears in the input queue. During my read operation, however, more bytes appear in the queue, and each of them causes an internal event flag to be set. This means that when I have read all the data from the queue and then call WaitCommEvent() a second time, it returns immediately with the EV_RXCHAR mask, even though there is no data to be read.
How should I handle reading and waiting for the event so that the event mask returned by WaitCommEvent() is always valid? Is it possible to reset the flags of the serial port so that when I have read all the data from the queue and then call WaitCommEvent(), it will not return immediately with a mask that was only valid before I read the data?
The only solution that comes to my mind is this:
1) write a request
2) call WaitCommEvent() and wait until waiting finishes
3) read all the data that the device sends (not only the data that is in the input queue at that moment)
4) call WaitCommEvent(), which should return true immediately, at the same time resetting the internally set event flag
5) do something and goto 1
Is it a good idea or is it stupid? Of course I know that the modem almost always finishes its answers with CRLF characters, so I could set the comm mask to EV_RXFLAG and wait for the #10 character to appear, but there are many other devices I communicate with, and they do not always send frame-end characters.
Your help will be appreciated. Thanks in advance!
Mariusz.
Your solution does sound workable. I just use a state machine to handle the transitions.
(pseudocode)
ioState := ioIdle;
while (ioState <> ioFinished) and (not aborted) do
  case ioState of
    ioIdle:     if there is data to read then ioState := ioMidFrame;
    ioMidFrame: begin read the available data; if end of frame then ioState := ioEndFrame; end;
    ioEndFrame: begin process the data; ioState := ioFinished; end;
    ioFinished: ; // don't do anything, listed for documentation purposes only
  end;