I have a device that sends serial data over a USB-to-COM port to my program, at various speeds and lengths.
Within the data there is a chunk of several thousand bytes that starts and ends with distinct marker codes ('FDDD' for start, 'FEEE' for end).
Due to the stream's length, occasionally not all data is received in one piece.
What is the recommended way to combine all bytes into one message BEFORE parsing it?
(I took care of the buffer size, but I have no control over the serial line quality, and cannot use hardware flow control with USB.)
Thanks
One possible way to accomplish this is to have something along these lines:
# variables
# buffer: byte buffer
# buffer_length: maximum number of bytes in the buffer
# new_char: byte last read from the UART
# prev_char: second-to-last byte read from the UART
# n: number of bytes stored in the buffer so far
new_char := 0
n := 0
loop forever:
    prev_char := new_char
    new_char := receive_from_uart()
    # start marker: reset the index to the beginning of the buffer
    if prev_char = 0xfd and new_char = 0xdd
        n := 0
    # end marker: the frame is ready, do whatever you need to do with a
    # complete message; buffer[n-1] holds the 0xfe of the end marker,
    # so the length of the payload is n-1 bytes
    else if prev_char = 0xfe and new_char = 0xee
        handle_complete_message(buffer, n-1)
    # otherwise store the byte, dropping anything past the end of the buffer
    else
        if n < buffer_length
            buffer[n] := new_char
            n := n + 1
A few tips/comments:
- you do not necessarily need separate start and end markers (you can use the same marker for both purposes)
- if you want to have two-byte markers, it is easier to have them share the same first byte
- you need to make sure the marker combinations do not occur in your data stream
- if you use escape codes to keep the markers out of your payload, it is convenient to handle them in the same code
- see HDLC asynchronous framing (simple to encode, simple to decode, and it takes care of the escaping)
- handle_complete_message usually either copies the contents of buffer elsewhere or, if in a hurry, swaps in another buffer in its place
- if your data frames do not have integrity checking, you should check whether the payload length equals buffer_length - 1, because that may indicate an overflowed (truncated) frame
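Since the original question is about C#, here is a minimal sketch of the same state machine as a reusable class. The class, method, and event names are illustrative, not from any library; only the framing logic above is assumed:

using System;
using System.Collections.Generic;

public class FrameAssembler
{
    // Accumulates bytes between the 0xFD 0xDD start marker and the
    // 0xFE 0xEE end marker, then raises FrameReceived with the payload.
    private readonly List<byte> buffer = new List<byte>();
    private byte prev;
    private bool inFrame;

    public event Action<byte[]> FrameReceived;

    public void Feed(byte b)
    {
        if (prev == 0xFD && b == 0xDD)          // start marker: reset
        {
            buffer.Clear();
            inFrame = true;
        }
        else if (prev == 0xFE && b == 0xEE)     // end marker: frame done
        {
            if (inFrame && buffer.Count > 0)
            {
                // drop the trailing 0xFE stored before 0xEE arrived
                buffer.RemoveAt(buffer.Count - 1);
                if (FrameReceived != null)
                    FrameReceived(buffer.ToArray());
            }
            inFrame = false;
        }
        else if (inFrame)
        {
            buffer.Add(b);
        }
        prev = b;
    }
}

In the DataReceived handler you would read raw bytes with port.Read(...) and call Feed for each one; feeding bytes rather than decoded strings avoids any text-encoding corruption of binary data.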
After several tests, I came up with the following simple solution to my own question (for C#).
Shown is a minimal, simplified solution; length checking and so on can be added.
'start' and 'end' are string markers of any length.
public void comPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    SerialPort port = (SerialPort)sender;
    string inData = port.ReadExisting();
    if (inData.Contains("start"))
    {
        // Loop to collect all message parts
        while (!inData.Contains("end"))
            inData += port.ReadExisting();
        // Complete by adding the last data chunk
        inData += port.ReadExisting();
    }
    // Use your collected message
    displayData(inData);
}
module delay(
    input [11:0] data_in,
    input delay_clk, // here I will use a 20 kHz clk
    output reg [11:0] data_out
);
    reg [11:0] memory [0:20000];
    reg [15:0] write_index; // i
    reg [15:0] read_index;  // j

    initial begin
        write_index = 16'b0000000000000000;
        read_index = 16'b0100111000100000;
    end

    always @(posedge delay_clk) begin
        read_index = read_index + 1;
        memory[write_index] <= data_in;
        data_out <= memory[read_index];
    end
endmodule
I want to make a 1-second delay by using the memory as a circular buffer.
I generate a bitstream and program it to the FPGA, but no sound comes out.
So how can I improve this Verilog code?
You don't increment write_index anywhere (as you do with read_index), so this hardware will overwrite one memory cell over and over again. I don't think this is what you wanted; it defeats the idea of using a memory array.
If you intend to store read_index in some kind of a memory element, use a non-blocking assignment (<=) like the ones for memory and data_out.
Your write_index and read_index are 16-bit variables, and you use them to iterate over a memory of 20k cells, so they will overflow at some point. Is this intentional? I mean, why 16 bits, and is it OK for those indexes to point at, e.g., address 41k?
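Putting those three points together, a corrected sketch might look like the following. The index width and the explicit wrap-around at DEPTH-1 are my assumptions, not part of the original code; at a 20 kHz sample clock, a one-second delay needs 20000 samples:

module delay(
    input [11:0] data_in,
    input delay_clk,              // 20 kHz sample clock
    output reg [11:0] data_out
);
    localparam DEPTH = 20000;     // 1 second at 20 kHz

    reg [11:0] memory [0:DEPTH-1];
    reg [14:0] write_index = 0;   // 15 bits covers 0..19999
    reg [14:0] read_index = 1;    // one ahead of write_index, so the
                                  // output lags the input by DEPTH-1 samples

    always @(posedge delay_clk) begin
        memory[write_index] <= data_in;
        data_out <= memory[read_index];
        // wrap both indexes explicitly instead of relying on overflow
        write_index <= (write_index == DEPTH-1) ? 15'd0 : write_index + 1'b1;
        read_index <= (read_index == DEPTH-1) ? 15'd0 : read_index + 1'b1;
    end
endmodule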
How can I make a memory module in which the data bus widths are passed as parameters to each instance, so that my design reconfigures itself according to the parameters? For example, assume I have byte-addressable memory, the DATA-IN bus width is 32 bits (4 bytes written each cycle), and DATA-OUT is 16 bits (2 bytes read each cycle). For another instance, DATA-IN is 64 bits and DATA-OUT is 16 bits. My design should work for all such instances.
What I have tried is to generate write pointer values according to the design parameters; e.g. for a 32-bit DATA-IN, the write pointer will increment by 4 every cycle while writing, for 64 bits the increment will be 8, and so on.
The problem is: how do I make 4 or 8 or 16 bytes be written in a single cycle according to the parameters passed to the instance?
// Something like the following is what I want to implement. This memory instance can be considered the internal memory of a FIFO with different data widths for reading and writing, in case you want an application for such a memory.
module mem #(parameter DIN=16, parameter DOUT=8, parameter ADDR=4, parameter BYTE=8)
(
    input [DIN-1:0] din,
    output [DOUT-1:0] dout,
    input wen, ren, clk
);
    localparam DEPTH = (1<<ADDR);

    reg [BYTE-1:0] mem [0:DEPTH-1];
    reg wpointer = 5'b00000;
    reg rpointer = 5'b00000;
    reg [BYTE-1:0] tmp [0:DIN/BYTE-1];

    function [ADDR:0] ptr;
        input [4:0] index;
        integer i;
        begin
            for (i=0; i<DIN/BYTE; i=i+1) begin
                mem[index] = din[(BYTE*(i+1)-1):BYTE*(i)]; // something like this I want to implement; I know this line is not allowed in Verilog, but is there any alternative to this?
                index = index + 1;
            end
            ptr = index;
        end
    endfunction

    always @(posedge clk) begin
        if (wen==1)
            wpointer <= wptr(wpointer);
    end

    always @(posedge clk) begin
        if (ren==1)
            rpointer <= ptr(rpointer);
    end
endmodule
din[(BYTE*(i+1)-1):BYTE*(i)] will not compile in Verilog because the MSB and LSB select bits are both variables; Verilog requires a known, constant range. The `+:` indexed part-select (also known as a slice) allows a variable select index with a constant width. It was introduced in IEEE Std 1364-2001 § 4.2.1. You can also read more about it in IEEE Std 1800-2012 § 11.5.1, or refer to previously asked questions: What is `+:` and `-:`? and Indexing vectors and arrays with +:.
din[BYTE*i +: BYTE] should work for you, alternatively you can use din[BYTE*(i+1)-1 -: BYTE].
Also, you should use non-blocking assignments (<=) to mem. In your code a read and a write can happen at the same time, and with blocking assignments there is a race condition when they access the same byte. It may synthesize, but your RTL and gate-level simulations may generate different results. I also strongly advise against using functions for assigning memory. To avoid nasty surprises, functions in synthesizable code need to be self-contained, with no references to anything outside the function, and with every internal variable reset to a static constant at the start of the function.
With the guidelines mentioned above, I'd recommend recoding to something like the below. This is a template to start with, not a free lunch. I left out the out-of-range index compensation for you to figure out on your own.
...
localparam DEPTH = (1<<ADDR);

reg [BYTE-1:0] mem [0:DEPTH-1];
reg [ADDR-1:0] wpointer, rpointer;
integer i;

initial begin // init values for pointers (FPGA, not ASIC)
    wpointer = {ADDR{1'b0}};
    rpointer = {ADDR{1'b0}};
end

always @(posedge clk) begin
    if (ren==1) begin
        for (i=0; i < DOUT/BYTE; i=i+1) begin
            dout[BYTE*i +: BYTE] <= mem[rpointer+i];
        end
        rpointer <= rpointer + (DOUT/BYTE);
    end
    if (wen==1) begin
        for (i=0; i < DIN/BYTE; i=i+1) begin
            mem[wpointer+i] <= din[BYTE*i +: BYTE];
        end
        wpointer <= wpointer + (DIN/BYTE);
    end
end
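To tie this back to the question, each instance then fixes its own widths through the parameter list. A sketch of the two configurations mentioned in the question, assuming the elided module header declares dout as output reg [DOUT-1:0] and that the din/dout/control signals shown here exist in the enclosing design:

// 32-bit write / 16-bit read, and 64-bit write / 16-bit read instances
mem #(.DIN(32), .DOUT(16), .ADDR(8), .BYTE(8)) mem_a (
    .din(din32), .dout(dout16_a), .wen(wen_a), .ren(ren_a), .clk(clk));

mem #(.DIN(64), .DOUT(16), .ADDR(8), .BYTE(8)) mem_b (
    .din(din64), .dout(dout16_b), .wen(wen_b), .ren(ren_b), .clk(clk));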
I'm trying to write a dissector for the Safari Remote Debug protocol which is based on bplists and have been reasonably successful (current code is here: https://github.com/andydavies/bplist-dissector).
I'm running into difficulty with reassembling packets though.
Normally the protocol sends a packet with 4 bytes containing the length of the next packet, then a packet with the bplist in it.
Unfortunately some packets from the iOS simulator don't follow this convention: the four bytes are either tacked onto the front of the bplist packet, or onto the end of the previous bplist packet, or the data is multiple bplists.
I've tried reassembling them using desegment_len and desegment_offset as follows:
function p_bplist.dissector(buf, pkt, root)
    -- length of data packet
    local dataPacketLength = tonumber(buf(0, 4):uint())
    local desiredPacketLength = dataPacketLength + 4
    -- if not enough data, indicate how much more we need
    if desiredPacketLength > buf:len() then
        pkt.desegment_len = dataPacketLength
        pkt.desegment_offset = 0
        return
    end
    -- have more than needed, so set offset for next dissection
    if buf:len() > desiredPacketLength then
        pkt.desegment_len = DESEGMENT_ONE_MORE_SEGMENT
        pkt.desegment_offset = desiredPacketLength
    end
    -- copy data needed
    buffer = buf:range(4, dataPacketLength)
    ...
What I'm attempting to do here is always force the size bytes to be the first four bytes of a packet to be dissected, but it doesn't work: I still see a 4-byte packet followed by an x-byte packet.
I can think of other ways of managing the extra four bytes on the front, but the protocol contains a lookup table that is 32 bytes from the end of the packet, so I need a way of accurately splitting the packet into bplists.
Here's an example cap: http://www.cloudshark.org/captures/2a826ee6045b #338 is an example of a packet where the bplist size is at the start of the data and there are multiple plists in the data.
Am I doing this right (looking at other questions on SO and examples around the web, I seem to be), or is there a better way?
TCP Dissector packet-tcp.c has tcp_dissect_pdus(), which
Loop for dissecting PDUs within a TCP stream; assumes that a PDU
consists of a fixed-length chunk of data that contains enough information
to determine the length of the PDU, followed by rest of the PDU.
There is no such function in the Lua API, but it is a good example of how to do it.
One more example. I used this a year ago for tests:
local slicer = Proto("slicer", "Slicer")

function slicer.dissector(tvb, pinfo, tree)
    local offset = pinfo.desegment_offset or 0
    local len = get_len() -- for tests I used a constant, but it can be taken from tvb

    while true do
        local nxtpdu = offset + len

        if nxtpdu > tvb:len() then
            pinfo.desegment_len = nxtpdu - tvb:len()
            pinfo.desegment_offset = offset
            return
        end

        tree:add(slicer, tvb(offset, len))
        offset = nxtpdu

        if nxtpdu == tvb:len() then
            return
        end
    end
end

local tcp_table = DissectorTable.get("tcp.port")
tcp_table:add(2506, slicer)
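Adapting that loop to the asker's protocol, where each PDU is a 4-byte length prefix followed by the bplist, get_len() would be replaced by reading the prefix out of the tvb. A sketch (untested, and the bplist field dissection itself is left out):

function p_bplist.dissector(buf, pkt, root)
    local offset = pkt.desegment_offset or 0
    while offset < buf:len() do
        -- if even the 4-byte length prefix is split, ask for one more segment
        if buf:len() - offset < 4 then
            pkt.desegment_len = DESEGMENT_ONE_MORE_SEGMENT
            pkt.desegment_offset = offset
            return
        end
        local pduLen = buf(offset, 4):uint() + 4 -- prefix + bplist payload
        -- if the whole PDU has not arrived yet, ask for the missing bytes
        if offset + pduLen > buf:len() then
            pkt.desegment_len = offset + pduLen - buf:len()
            pkt.desegment_offset = offset
            return
        end
        -- one complete bplist: dissect buf(offset + 4, pduLen - 4) here
        offset = offset + pduLen
    end
end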
I am using Delphi 6 Pro with the DSPACK DirectShow component library to create a DirectShow filter that delivers data in Wav format from a custom audio source. Just to be very clear, I am delivering the raw PCM audio samples as Byte data. There are no Wave files involved, but other Filters downstream in my Filter Graph expect the output pin to deliver standard WAV format sample data in Byte form.
Note: When I get the data from the custom audio source, I format it to the desired number of channels, sample rate, and bits per sample and store it in a TWaveFile object I created. This object has a properly formatted TWaveFormatEx data member that is set correctly to reflect the underlying format of the data I stored.
I don't know how to properly set up the MediaType parameter during a GetMediaType() call:
function TBCPushPinPlayAudio.GetMediaType(MediaType: PAMMediaType): HResult;
.......
  with FWaveFile.WaveFormatEx do
  begin
    MediaType.majortype := (1)
    MediaType.subtype := (2)
    MediaType.formattype := (3)
    MediaType.bTemporalCompression := False;
    MediaType.bFixedSizeSamples := True;
    MediaType.pbFormat := (4)
    // Number of bytes per sample is the number of channels in the
    // Wave audio data times the number of bytes per sample
    // (wBitsPerSample div 8);
    MediaType.lSampleSize := nChannels * (wBitsPerSample div 8);
  end;
What are the correct values for (1), (2), and (3)? I know about the MEDIATYPE_Audio, MEDIATYPE_Stream, and MEDIASUBTYPE_WAVE GUID constants, but I am not sure what goes where.
Also, I assume that I need to copy the WaveFormatEx structure/record from my FWaveFile object over to the pbFormat pointer (4). I have two questions about that:
1) I assume that I should use CoTaskMemAlloc() to allocate a new TWaveFormatEx record and copy my FWaveFile object's TWaveFormatEx record onto it before assigning the pbFormat pointer to it, correct?
2) Is TWaveFormatEx the correct structure to pass along? Here is how TWaveFormatEx is defined:
tWAVEFORMATEX = packed record
    wFormatTag: Word;       { format type }
    nChannels: Word;        { number of channels (i.e. mono, stereo, etc.) }
    nSamplesPerSec: DWORD;  { sample rate }
    nAvgBytesPerSec: DWORD; { for buffer estimation }
    nBlockAlign: Word;      { block size of data }
    wBitsPerSample: Word;   { number of bits per sample of mono data }
    cbSize: Word;           { the count in bytes of the size of extra information (after cbSize) }
end;
UPDATE: 11-12-2011
I want to highlight one of the comments by @Roman R attached to his accepted reply, where he tells me to use MEDIASUBTYPE_PCM for the sub-type, since it is so important. I lost a significant amount of time chasing down a DirectShow "no intermediate filter combination" error because I had forgotten to use that value for the sub-type and was (incorrectly) using MEDIASUBTYPE_WAVE instead. MEDIASUBTYPE_WAVE is incompatible with many other filters, such as system capture filters, and that was the root cause of the failure. The bigger lesson here: if you are debugging an inter-filter media format negotiation error, make sure the formats of the pins being connected are completely equal. I made the mistake during initial debugging of only comparing the WAV format parameters (format tag, number of channels, bits per sample, sample rate), which were identical between the pins. However, the difference in sub-type due to my improper use of MEDIASUBTYPE_WAVE caused the pin connection to fail. As soon as I changed the sub-type to MEDIASUBTYPE_PCM, as Roman suggested, the problem went away.
(1) is MEDIATYPE_Audio.
(2) is typically a mapping from FOURCC code into GUID, see Media Types, Audio Media Types section.
(3) is FORMAT_WaveFormatEx.
(4) is a pointer (typically allocated by COM task memory allocator API) to WAVEFORMATEX structure.
1) - yes, you should allocate memory, put valid data there (by copying or initializing it directly), assign this pointer to pbFormat, and put the structure size into cbFormat.
2) - yes, it looks good; it is defined like this in the first place: WAVEFORMATEX structure.
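Putting the answer's values back into the question's skeleton gives roughly the following Delphi sketch. This is my own assembly of the pieces, not verified against DSPACK; error handling is omitted, and MEDIASUBTYPE_PCM is used per the update above:

function TBCPushPinPlayAudio.GetMediaType(MediaType: PAMMediaType): HResult;
var
  cb: Integer;
  pwfx: PWaveFormatEx;
begin
  with FWaveFile.WaveFormatEx do
  begin
    MediaType.majortype := MEDIATYPE_Audio;       // (1)
    MediaType.subtype := MEDIASUBTYPE_PCM;        // (2) not MEDIASUBTYPE_WAVE
    MediaType.formattype := FORMAT_WaveFormatEx;  // (3)
    MediaType.bTemporalCompression := False;
    MediaType.bFixedSizeSamples := True;
    // (4): allocate a copy of the format block with the COM task allocator
    cb := SizeOf(TWaveFormatEx) + cbSize;
    pwfx := CoTaskMemAlloc(cb);
    Move(FWaveFile.WaveFormatEx, pwfx^, cb);
    MediaType.pbFormat := pwfx;
    MediaType.cbFormat := cb;
    MediaType.lSampleSize := nChannels * (wBitsPerSample div 8);
  end;
  Result := S_OK;
end;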
I have a Client/Server architecture (C# .NET 4.0) that sends command packets of data as byte arrays. There is a variable number of parameters in any command, and each parameter is of variable length. Because of this I use delimiters for the end of a parameter and for the command as a whole. The operand is always 2 bytes and both types of delimiter are 1 byte. The last parameter_delimiter is redundant, as command_delimiter provides the same functionality.
The command structure is as follows:
FIELD               SIZE (BYTES)
operand             2
parameter1          x
parameter_delimiter 1
parameter2          x
parameter_delimiter 1
.............       .............
parameterN          x
command_delimiter   1
Parameters are sourced from many different types, i.e. ints, strings, etc., all encoded into byte arrays.
The problem I have is that parameters, when encoded into byte arrays, sometimes contain bytes with the same value as a delimiter. For example, command_delimiter = 255, and a parameter may have that byte value inside it.
There are three ways I can think of to fix this:
1) Encode the parameters differently so that they can never contain the same value as a delimiter (255 and 254). Modulus? This will mean that parameters become larger, i.e. an Int16 will take more than 2 bytes, etc. (see the byte-stuffing sketch after this list).
2) Do not use delimiters at all; use count and length values at the start of the command structure.
3) Use something else.
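For option 1, the usual technique is byte-stuffing rather than a larger re-encoding: reserve one escape byte and transform any payload byte that collides with a delimiter or the escape itself. A minimal C# sketch, assuming 255 is the command delimiter, 254 the parameter delimiter, and 253 a reserved escape byte (the XOR-0x20 transform mirrors HDLC asynchronous framing):

using System.Collections.Generic;

static class ByteStuffing
{
    const byte CommandDelimiter = 255;
    const byte ParameterDelimiter = 254;
    const byte Escape = 253;

    // Replace each reserved byte in the payload with Escape followed by
    // the original byte XOR 0x20, so no reserved value appears in the output.
    public static byte[] Stuff(byte[] payload)
    {
        var output = new List<byte>();
        foreach (byte b in payload)
        {
            if (b == CommandDelimiter || b == ParameterDelimiter || b == Escape)
            {
                output.Add(Escape);
                output.Add((byte)(b ^ 0x20));
            }
            else
            {
                output.Add(b);
            }
        }
        return output.ToArray();
    }

    // Inverse transform, applied after splitting the stream on the delimiters.
    // Assumes well-formed input (no trailing lone Escape byte).
    public static byte[] Unstuff(byte[] data)
    {
        var output = new List<byte>();
        for (int i = 0; i < data.Length; i++)
        {
            if (data[i] == Escape)
                output.Add((byte)(data[++i] ^ 0x20));
            else
                output.Add(data[i]);
        }
        return output.ToArray();
    }
}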
To my knowledge, the way TCP/IP buffers work is that SOME SORT of delimiter has to be used to separate 'commands' or 'bundles of data', as a buffer may contain multiple commands, or a command may span multiple buffers.
BinaryReader / Writer seems like an obvious candidate; the only issue is that the byte array may contain multiple commands (with parameters inside), so the byte array would still have to be chopped up in order to feed it into the BinaryReader.
Suggestions?
Thanks.
The standard way to do this is to have the length of the message in the (fixed) first few bytes of the message. So you could use the first 4 bytes to denote the length of a message, then read that many bytes for the content of the message. The next 4 bytes would be the length of the next message. A length of 0 could indicate the end of messages. Or you could use a header with a message count.
Also, remember TCP is a byte stream, so don't expect a complete message to be available every time you read data from a socket. You could receive an arbitrary number of bytes at every read.
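A minimal sketch of that length-prefix pattern in C#, using the BinaryReader/Writer the asker mentioned. The framing details are assumptions for illustration: a 4-byte little-endian length followed by the payload, with 0 meaning end of messages:

using System.IO;

static class LengthPrefixFraming
{
    // Write a 4-byte length prefix followed by the payload bytes.
    public static void WriteMessage(BinaryWriter writer, byte[] payload)
    {
        writer.Write(payload.Length); // BinaryWriter writes Int32 little-endian
        writer.Write(payload);
    }

    // Read one complete message, blocking until all of its bytes arrive.
    // Returns null when the sender signals end of messages with length 0.
    public static byte[] ReadMessage(BinaryReader reader)
    {
        int length = reader.ReadInt32(); // blocks until 4 bytes are available
        if (length == 0)
            return null;
        byte[] payload = new byte[length];
        int read = 0;
        while (read < length) // a socket read may return fewer bytes than asked
        {
            int n = reader.Read(payload, read, length - read);
            if (n == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            read += n;
        }
        return payload;
    }
}

Each message body returned by ReadMessage can then be split on the parameter delimiters, or given per-parameter length prefixes of its own, which removes the delimiter-collision problem entirely.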