Wireshark: dissect information without displaying it in the dissection tree

This may have been explained elsewhere, but I have not found it. I have to work within the confines of Wireshark 2.4.x.
So I defined some header fields like so:
{ &hf_td_timestamp,
    { "Timestamp", "td.timestamp",
      FT_ABSOLUTE_TIME, ABSOLUTE_TIME_LOCAL, NULL, 0x0, NULL, HFILL
    } },
{ &hf_td_timestamp_sec,
    { "Timestamp Seconds", "td.timestamp.sec",
      FT_UINT64, BASE_DEC, NULL, 0x0, NULL, HFILL
    } },
{ &hf_td_timestamp_nsec,
    { "Timestamp nSeconds", "td.timestamp.nsec",
      FT_UINT32, BASE_DEC, NULL, 0x0, NULL, HFILL
    } },
and the data for one of them gets stored and added to the dissection tree like so:
proto_tree_add_item(td_tree, hf_td_timestamp, tvb, offset, 8, ENC_TIME_TIMESPEC);
I only want to display the one line item, not all three. The information for the other two is of course derived from the same bytes. Ultimately I would like to have the other fields available for adding to the columns, but not shown in the detail dissection.
That is part 1. Once I can establish the storing of the information, I will of course add the other two as a single line in the detail pane, formatted as seconds.nanoseconds. The values just need to be stored separately so that the data can be parsed from an Excel CSV file. Excel cannot handle the precision of the nanoseconds in decimal format; that is why they must be separate.
Part 2: store some metadata that is calculated from known fields, specifically the delta between these timestamps. Wireshark can give the delta of the capture timestamp, but not of the timestamp within the payload. So basically: store the delta between the payload timestamp and the payload timestamp of the last packet with the same port information. Once I can get past part 1, I should be able to accomplish part 2.
So, is there a function that will parse the tvb and only store the value, as opposed to storing it for display?

proto_tree_add_item(td_tree, hf_td_timestamp, tvb, offset, 8, ENC_TIME_TIMESPEC);
/* The 8-byte timespec is 4 bytes of seconds followed by 4 bytes of
 * nanoseconds, so each sub-field covers 4 bytes. */
proto_item *ti_sec  = proto_tree_add_item(td_tree, hf_td_timestamp_sec,  tvb, offset,     4, ENC_BIG_ENDIAN);
proto_item *ti_nsec = proto_tree_add_item(td_tree, hf_td_timestamp_nsec, tvb, offset + 4, 4, ENC_BIG_ENDIAN);
PROTO_ITEM_SET_HIDDEN(ti_sec);
PROTO_ITEM_SET_HIDDEN(ti_nsec);
If the proto_tree_add_item() call is placed directly inside PROTO_ITEM_SET_HIDDEN(), the item will still be displayed; PROTO_ITEM_SET_HIDDEN() works on items previously assigned to a variable, not on items created inline in its argument.

As for part 2: the metadata was created by generating a GHashTable for each new metadata item, to hold the payload timestamps for each of the line numbers. (If there is a better way, I would like to know, e.g. accessing a completed list stored by Wireshark instead of creating my own.) Generate the timestamp deltas, convert the values into a tvb, then read the tvb back to store the result in the fields.
GHashTable * timestamp_map = NULL;
GHashTable * timestamp_delta = NULL;
.
.
.
// Store the timestamp (key = linenum, value = timestamp)
timestamp_map = g_hash_table_new_full(g_direct_hash, g_direct_equal, NULL, g_free);
// Store the list of line numbers from the associated client
timestamp_delta = g_hash_table_new_full(g_direct_hash, g_direct_equal, NULL, (GDestroyNotify) g_list_free);
.
.
.
// find your deltas through your lookups
// convert them to a tvb and then to fields
// Create a new tvb holding the 8-byte delta
tvbuff_t *tvbtmp = tvb_new_real_data(vals.buffer, 8, 8);
// Store in the field hf_tu_timestamp_delta
proto_tree_add_item(tree, hf_tu_timestamp_delta, tvbtmp, 0, 8, ENC_LITTLE_ENDIAN);
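The lookup step elided above could look roughly like this. A minimal sketch (my addition: the helper name, the prev_frame parameter, and keying timestamp_map by pinfo->num are assumptions, not the original code):
static void
store_and_delta(packet_info *pinfo, const nstime_t *payload_ts, guint32 prev_frame)
{
    /* Store this packet's payload timestamp keyed by frame number. */
    nstime_t *stored = g_new(nstime_t, 1);
    *stored = *payload_ts;
    g_hash_table_insert(timestamp_map, GUINT_TO_POINTER(pinfo->num), stored);

    /* prev_frame would come from your own per-port bookkeeping. */
    const nstime_t *prev = g_hash_table_lookup(timestamp_map,
                                               GUINT_TO_POINTER(prev_frame));
    if (prev != NULL) {
        nstime_t delta;
        nstime_delta(&delta, payload_ts, prev);  /* delta = payload_ts - prev */
        /* ...serialize delta into vals.buffer and add it via the tvb above */
    }
}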
NOTE: For very large captures, the memory usage of these tables will be very large (and expensive).


How to get VWAP using DolphinDB TimeSeriesEngine or ReactiveStateEngine

I am getting live tick data consisting of Time, Symbol Name, Last Traded Price, and Cumulative Volume (daily).
How can I compute VWAP using 1) a custom function, 2) TimeSeriesEngine, or 3) ReactiveStateEngine in DolphinDB? The relevant code is below.
This is the stream table for receiving ticks from Python:
t_colNames=`ts`symbol`price`vol`upd_tick
t_colTypes=`TIMESTAMP`SYMBOL`DOUBLE`DOUBLE`TIMESTAMP
This is the stream table to store 1-min OHLC data:
ohlc_colNames=`ts`symbol`open`high`low`close`volume`tp`last_tick`upd_1m
ohlc_colTypes=`TIMESTAMP`SYMBOL`DOUBLE`DOUBLE`DOUBLE`DOUBLE`DOUBLE`DOUBLE`TIMESTAMP`TIMESTAMP
This is the 1-min OHLC TimeSeriesEngine:
OHLC_sm1 = createTimeSeriesEngine(name="OHLC_sm1", windowSize=60000, step=60000, metrics=<[first(price) as open, max(price) as high, min(price) as low, last(price) as close, sum(vol) as volume, (max(price)+min(price)+last(price))/3 as tp, last(upd_tick) as last_tick, now() as upd_1m]>, dummyTable=tmp, outputTable=sm1 , timeColumn=`ts, useSystemTime=true, keyColumn=`symbol, updateTime=60000, useWindowStartTime=false);
This is the function to convert cumulative volume to per-tick volume:
def calcVolume(mutable dictVolume, mutable tsAggrOHLC, msg){
    t = select ts,symbol,price,vol,upd_tick from msg context by symbol limit -1
    update t set prevVolume = dictVolume[symbol]
    dictVolume[t.symbol] = t.vol
    tsAggrOHLC.append!(t.update!("vol", <vol-prevVolume>))
}
dictVol = dict(STRING, DOUBLE)
subscribeTable(tableName="t", actionName="OHLC_sm1", offset=0, handler=calcVolume{dictVol,OHLC_sm1}, msgAsTable=true, hash=1)
I recommend using ReactiveStateEngine to convert cumulative volume to volume and then connecting the two engines in series. Here is an example:
tradesData = your_tick_data
//define Trade Table
x=tradesData.schema().colDefs
share streamTable(100:0, x.name, x.typeString) as Trade
//define OHLC outputTable
share streamTable(100:0, `datetime`symbol`open`high`low`close`volume`updatetime,[TIMESTAMP,SYMBOL,DOUBLE,DOUBLE,DOUBLE,DOUBLE,LONG,TIMESTAMP]) as OHLC
//1 min OHLC TimeSeriesEngine
tsAggrOHLC = createTimeSeriesAggregator(name="aggr_ohlc", windowSize=60000, step=60000, metrics=<[first(Price),max(Price),min(Price),last(Price),wavg(Price,Volume),now()]>, dummyTable=Trade, outputTable=OHLC, timeColumn=`Datetime, keyColumn=`Symbol)
//ReactiveStateEngine:convert cumulative volume to volume
rsAggrOHLC = createReactiveStateEngine(name="calc_vol", metrics=<[Datetime, Price, deltas(Volume) as Volume]>, dummyTable=Trade, outputTable=tsAggrOHLC, keyColumn=`Symbol)
//subscribe table and insert data into engines
subscribeTable(tableName="Trade", actionName="minuteOHLC2", offset=0, handler=append!{rsAggrOHLC}, msgAsTable=true)
replay(inputTables=tradesData, outputTables=Trade, dateColumn=`Datetime)
You can use user-defined functions in any of the engine's metrics.
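For reference (this note and formula are my addition; it is the standard definition), the wavg(Price, Volume) metric in the TimeSeriesEngine above computes the volume-weighted average price over each window:

\mathrm{VWAP} = \frac{\sum_i p_i \, v_i}{\sum_i v_i}

where p_i is the per-tick price and v_i the de-cumulated per-tick volume within the window.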

Parsing cbor stream

I'm trying to parse a CBOR stream using TinyCBOR. The goal is to write generic parsing code for the map type (because I don't know how many keys there are in the CBOR stream, or what they are), not JSON-style access where I just get values by key, since to get a value by key I have to already know the key.
I am able to parse a value by passing the key to this function:
cbor_value_map_find_value(&main_value, "Age", &map_value);
but a few things are still not clear to me.
What sequence of calls should I follow to get the keys and values from a CBOR stream?
For example, the following is my data in map format:
{"Roll_number": 7, "Age": 24, "Name": "USER"}
Here is its binary format, from cbor.me:
A3 # map(3)
6B # text(11)
526F6C6C5F6E756D626572 # "Roll_number"
07 # unsigned(7)
63 # text(3)
416765 # "Age"
18 18 # unsigned(24)
64 # text(4)
4E616D65 # "Name"
64 # text(4)
55534552 # "USER"
1. How do I get a key from the stream, like Roll_number or Age? (Sequentially getting keys and values is also fine.)
2. After getting the Roll_number value, how can I jump to the next element ("Age") to get its key and value?
3. How do I identify that I have reached the end of the stream and there is no more data?
Any snippet of code showing how to parse, and which sequence of functions to follow, would be appreciated.
Thanks!
The example code is pretty helpful for understanding the API. To iterate over the keys and values of a map, you call cbor_value_enter_container, then cbor_value_advance until cbor_value_at_end returns true (as long as there are no nested maps or arrays you want to look inside). For example:
// Declarations assumed by this snippet (my addition); `input` holds the
// encoded map bytes shown above.
uint8_t input[] = { 0xA3, 0x6B, /* ...the rest of the bytes from the dump above... */ };
CborParser parser;
CborValue it, map;
CborError err;
int val;

cbor_parser_init(input, sizeof(input), 0, &parser, &it);
if (!cbor_value_is_map(&it)) {
    return 1;
}
err = cbor_value_enter_container(&it, &map);
if (err) return 1;
while (!cbor_value_at_end(&map)) {
    // get the key. Remember, keys don't have to be strings.
    if (!cbor_value_is_text_string(&map)) {
        return 1;
    }
    char *buf;
    size_t n;
    // Note: this also advances to the value
    err = cbor_value_dup_text_string(&map, &buf, &n, &map);
    if (err) return 1;
    printf("Key: '%.*s'\n", (int)n - 1, buf);
    if (strncmp(buf, "Age", n - 1) == 0) {
        if (cbor_value_is_integer(&map)) {
            // Found the expected key and value type
            err = cbor_value_get_int(&map, &val);
            if (err) return 1;
            printf("age: %d\n", val);
        }
        // note: can't break here, have to keep going until the end if you want
        // `it` to still be valid.
    }
    free(buf);
    err = cbor_value_advance(&map);
    if (err) return 1;
}
err = cbor_value_leave_container(&it, &map);
if (err) return 1;
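If the map's values are of mixed types, you can dispatch on the element type before reading it. A minimal sketch of this (my addition, not part of the original answer), to be placed where the value is read inside the loop:
switch (cbor_value_get_type(&map)) {
case CborIntegerType: {
    int64_t i;
    if (cbor_value_get_int64(&map, &i) == CborNoError)
        printf("int value: %lld\n", (long long)i);
    break;
}
case CborTextStringType: {
    char *s;
    size_t len;
    if (cbor_value_dup_text_string(&map, &s, &len, NULL) == CborNoError) {
        printf("text value: %s\n", s);
        free(s);
    }
    break;
}
default:
    /* other types: the cbor_value_advance() at the bottom of the loop
     * skips any value, including whole nested containers */
    break;
}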

How do you create Wireshark dissector subtrees programmatically based on a protocol value?

How do you create a Wireshark dissector in C that parses a protocol to determine the number of sub-trees needed, and then programmatically creates that many sub-trees?
I am working with a protocol that has 2 parts:
1) the number of requests in the packet (2 bytes long)
2) followed by those requests (1 to 20 requests, each 5 bytes long)
Each request is 5 bytes long:
2 bytes - request number
3 bytes - random number
Here is an example packet:
020155502666
Dissected, this packet is:
Length : 02
request 01:
random number : 555
request 02:
random number: 666
This code allows me to create a master tree with a single sub-tree, to which I add the request number and random number. How do I change it to programmatically create a sub-tree under the master tree for each request object?
static int
dissect_packet(tvbuff_t *tvb, packet_info *pinfo _U_, proto_tree *tree, void *data _U_)
{
    /* Set up structures needed to add the protocol subtree and manage it */
    proto_item *ti;
    proto_tree *packet_tree;
    /* Other misc. local variables. */
    guint offset = 0;
    guint16 number_of_requests;
    /* create display subtree for the protocol
     * (proto_my_protocol and ett_packet are the registered protocol handle
     * and subtree index) */
    ti = proto_tree_add_item(tree, proto_my_protocol, tvb, 0, -1, ENC_NA);
    packet_tree = proto_item_add_subtree(ti, ett_packet);
    proto_tree_add_item(packet_tree, hf_requestID, tvb, offset, 2, ENC_BIG_ENDIAN);
    offset += 2;
    proto_tree_add_item(packet_tree, hf_random_number, tvb, offset, 3, ENC_BIG_ENDIAN);
    offset += 3;
    return tvb_captured_length(tvb);
}
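A sketch of how the body above could be changed to create one sub-tree per request (my illustration, not tested code; hf_number_of_requests, hf_request, and ett_request are assumed, separately registered field and subtree handles):
/* Read the 2-byte request count, then create one sub-tree per request. */
guint i;
number_of_requests = tvb_get_ntohs(tvb, offset);
proto_tree_add_item(packet_tree, hf_number_of_requests, tvb, offset, 2, ENC_BIG_ENDIAN);
offset += 2;
for (i = 0; i < number_of_requests; i++) {
    proto_item *req_ti;
    proto_tree *req_tree;

    /* One 5-byte request: a labelled item with its own sub-tree. */
    req_ti = proto_tree_add_item(packet_tree, hf_request, tvb, offset, 5, ENC_NA);
    proto_item_append_text(req_ti, " %u", i + 1);
    req_tree = proto_item_add_subtree(req_ti, ett_request);
    proto_tree_add_item(req_tree, hf_requestID, tvb, offset, 2, ENC_BIG_ENDIAN);
    proto_tree_add_item(req_tree, hf_random_number, tvb, offset + 2, 3, ENC_BIG_ENDIAN);
    offset += 5;
}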

Single array in the HDF5 file

(Image of my dataset omitted.)
I am using HDF5DotNet with C#, and currently I can only read the full dataset shown in the attached image. The HDF5 file is very big, up to nearly 10 GB, and if I load the whole array into memory it runs out of memory.
I would like to read all the data from rows 5 and 7 in the attached image. Is there any way to read only these 2 rows into memory at a time, without having to load all the data into memory first?
private static void OpenH5File()
{
var h5FileId = H5F.open(#"D:\Sandbox\Flood Modeller\Document\xmdf_results\FMA_T1_10ft_001.xmdf", H5F.OpenMode.ACC_RDONLY);
string dataSetName = "/FMA_T1_10ft_001/Temporal/Depth/Values";
var dataset = H5D.open(h5FileId, dataSetName);
var space = H5D.getSpace(dataset);
var dataType = H5D.getType(dataset);
long[] offset = new long[2];
long[] count = new long[2];
long[] stride = new long[2];
long[] block = new long[2];
offset[0] = 1; // start at row 5
offset[1] = 2; // start at column 0
count[0] = 2; // read 2 rows
count[0] = 165701; // read all columns
stride[0] = 0; // don't skip anything
stride[1] = 0;
block[0] = 1; // blocks are single elements
block[1] = 1;
// Dataspace associated with the dataset in the file
// Select a hyperslab from the file dataspace
H5S.selectHyperslab(space, H5S.SelectOperator.SET, offset, count, block);
// Dimensions of the file dataspace
var dims = H5S.getSimpleExtentDims(space);
// We also need a memory dataspace which is the same size as the file dataspace
var memspace = H5S.create_simple(2, dims);
double[,] dataArray = new double[1, dims[1]]; // just get one array
var wrapArray = new H5Array<double>(dataArray);
// Now we can read the hyperslab
H5D.read(dataset, dataType, memspace, space,
new H5PropertyListId(H5P.Template.DEFAULT), wrapArray);
}
You need to select a hyperslab which has the correct offset, count, stride, and block for the subset of the dataset that you wish to read. These are all arrays which have the same number of dimensions as your dataset.
The block is the size of the element block in each dimension to read, i.e. 1 is a single element.
The offset is the number of blocks from the start of the dataset to start reading, and count is the number of blocks to read.
You can select non-contiguous regions by using stride, which again counts in blocks; for example, starting at row 5 with a stride of 2 and a count of 2 selects rows 5 and 7.
I'm afraid I don't know C#, so the following is in C. In your example, you would have:
hsize_t offset[2], count[2], stride[2], block[2];
offset[0] = 5; // start at row 5
offset[1] = 0; // start at column 0
count[0] = 2; // read 2 rows
count[1] = 165702; // read all columns
stride[0] = 2; // skip every other row, giving rows 5 and 7
stride[1] = 1; // don't skip any columns
block[0] = 1; // blocks are single elements
block[1] = 1;
// This assumes you already have an open dataspace with ID dataspace_id
H5Sselect_hyperslab(dataspace_id, H5S_SELECT_SET, offset, stride, count, block);
You can find more information on reading/writing hyperslabs in the HDF5 tutorial.
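Putting the pieces together in C, a minimal sketch of the whole read for the two rows (my addition; it assumes a 2-D dataset of doubles with 165702 columns, and omits error checking for brevity):
#include "hdf5.h"

#define NCOLS 165702

/* Read rows 5 and 7 of a 2-D double dataset into a caller-supplied
 * 2 x NCOLS buffer. */
static herr_t read_rows_5_and_7(hid_t dataset, double (*rows)[NCOLS])
{
    hid_t filespace = H5Dget_space(dataset);

    hsize_t offset[2] = {5, 0};
    hsize_t stride[2] = {2, 1};      /* every other row: rows 5 and 7 */
    hsize_t count[2]  = {2, NCOLS};
    hsize_t block[2]  = {1, 1};
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, stride, count, block);

    /* Memory dataspace: a contiguous 2 x NCOLS array. */
    hsize_t mdims[2] = {2, NCOLS};
    hid_t memspace = H5Screate_simple(2, mdims, NULL);

    herr_t status = H5Dread(dataset, H5T_NATIVE_DOUBLE, memspace, filespace,
                            H5P_DEFAULT, rows);

    H5Sclose(memspace);
    H5Sclose(filespace);
    return status;
}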
It seems there are two forms of H5D.read in C#; you want the second form:
H5D.read(Type) Method (H5DataSetId, H5DataTypeId, H5DataSpaceId,
H5DataSpaceId, H5PropertyListId, H5Array(Type))
This allows you specify the memory and file dataspaces. Essentially, you need one dataspace which has information about the size, stride, offset, etc. of the variable in memory that you want to read into; and one dataspace for the dataset in the file that you want to read from. This lets you do things like read from a non-contiguous region in a file to a contiguous region in an array in memory.
You want something like
// Dataspace associated with the dataset in the file
var dataspace = H5D.get_space(dataset);
// Select a hyperslab from the file dataspace
H5S.selectHyperslab(dataspace, H5S.SelectOperator.SET, offset, count);
// Dimensions of the file dataspace
var dims = H5S.getSimpleExtentDims(dataspace);
// We also need a memory dataspace which is the same size as the file dataspace
var memspace = H5S.create_simple(rank, dims);
// Now we can read the hyperslab
H5D.read(dataset, datatype, memspace, dataspace,
new H5PropertyListId(H5P.Template.DEFAULT), wrapArray);
From your posted code, I think I've spotted the problem. First you do this:
var space = H5D.getSpace(dataset);
then you do
var dataspace = H5D.getSpace(dataset);
These two calls do the same thing, but create two different variables.
You call H5S.selectHyperslab with space, but H5D.read uses dataspace.
You need to make sure you are using the correct variables consistently. If you remove the second call to H5D.getSpace, and change dataspace -> space, it should work.
Maybe you want to have a look at HDFql, as it abstracts you from the low-level details of HDF5. Using HDFql in C#, you can read rows #5 and #7 of dataset Values using a hyperslab selection like this:
float [,]data = new float[2, 165702];
HDFql.Execute("SELECT FROM Values(5:2:2:1) INTO MEMORY " + HDFql.VariableTransientRegister(data));
Afterwards, you can access these rows through variable data. Example:
for(int x = 0; x < 2; x++)
{
    for(int y = 0; y < 165702; y++)
    {
        System.Console.WriteLine(data[x, y]);
    }
}

BlackBerry Decryption - BadPaddingException

I have successfully encrypted data in BlackBerry in AES format. In order to verify my result, I am trying to implement decryption in BlackBerry using the following method:
private static byte[] decrypt( byte[] keyData, byte[] ciphertext )throws CryptoException, IOException
{
// First, create the AESKey again.
AESKey key = new AESKey( keyData );
// Now, create the decryptor engine.
AESDecryptorEngine engine = new AESDecryptorEngine( key );
// Since we cannot guarantee that the data will be of an equal block length
// we want to use a padding engine (PKCS5 in this case).
PKCS5UnformatterEngine uengine = new PKCS5UnformatterEngine( engine );
// Create the BlockDecryptor to hide the decryption details away.
ByteArrayInputStream input = new ByteArrayInputStream( ciphertext );
BlockDecryptor decryptor = new BlockDecryptor( uengine, input );
// Now, read in the data. Remember that the last 20 bytes represent
// the SHA1 hash of the decrypted data.
byte[] temp = new byte[ 100 ];
DataBuffer buffer = new DataBuffer();
for( ;; ) {
int bytesRead = decryptor.read( temp );
buffer.write( temp, 0, bytesRead );
if( bytesRead < 100 ) {
// We ran out of data.
break;
}
}
byte[] plaintextAndHash = buffer.getArray();
int plaintextLength = plaintextAndHash.length - SHA1Digest.DIGEST_LENGTH;
byte[] plaintext = new byte[ plaintextLength ];
byte[] hash = new byte[ SHA1Digest.DIGEST_LENGTH ];
System.arraycopy( plaintextAndHash, 0, plaintext, 0, plaintextLength );
System.arraycopy( plaintextAndHash, plaintextLength, hash, 0,
SHA1Digest.DIGEST_LENGTH );
// Now, hash the plaintext and compare against the hash
// that we found in the decrypted data.
SHA1Digest digest = new SHA1Digest();
digest.update( plaintext );
byte[] hash2 = digest.getDigest();
if( !Arrays.equals( hash, hash2 )) {
throw new RuntimeException();
}
return plaintext;
}
I get a "BadPaddingException" thrown at the following line:
int bytesRead = decryptor.read( temp );
Can anybody please help?
I think the problem might be in this block:
for( ;; ) {
int bytesRead = decryptor.read( temp );
buffer.write( temp, 0, bytesRead );
if( bytesRead < 100 ) {
// We ran out of data.
break;
}
}
When read returns -1, you are also writing that to the buffer, and the exit condition is wrong as well. Compare it to this block from the CryptoDemo sample project:
for( ;; ) {
int bytesRead = decryptor.read( temp );
if( bytesRead <= 0 )
{
// We have run out of information to read, bail out of loop
break;
}
db.write(temp, 0, bytesRead);
}
Also there are a few points you should be careful about, even if they are not causing the error:
AESDecryptorEngine engine = new AESDecryptorEngine( key );
If you read the docs for this constructor, it says:
"Creates an instance of the AESEncryptorEngine class given the AES key
with a default block length of 16 bytes."
But in the previous line, when you create the key, you are doing this:
AESKey key = new AESKey( keyData );
According to the docs, this "Creates the longest key possible from existing data", BUT "only the first 128 bits of the array are used". So no matter what length your keyData has, you will always be using a 128-bit key, which is the shortest of the 3 available sizes (128, 192, 256).
Instead, you could explicitly select the key length. For instance, to use AES-256:
AESKey key = new AESKey(keyData, 0, 256); //key length in BITS
AESDecryptorEngine engine = new AESDecryptorEngine(key, 32); //key length IN BYTES
Finally, even if you get this working, you should be aware that deriving the key directly from a password (which might be of arbitrary size) is not secure. You could use PKCS5KDF2PseudoRandomSource to derive a stronger key from the key material (password), instead of just using PKCS5 for padding.
Your encrypted data should be correctly padded to the block size (16 bytes).
Try to decrypt the data without padding, and see if the tail bytes correspond to PKCS#5 padding (for instance, if 5 bytes of padding were needed, the plaintext should end with the bytes 0x05 0x05 0x05 0x05 0x05).
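To check that by hand, here is a small sketch of a PKCS#5/PKCS#7 padding validator (my addition; the surrounding BlackBerry code is Java, this C snippet is just to illustrate the rule):
/* Returns the padding length (1..16) if buf ends with valid
 * PKCS#5/PKCS#7 padding for a 16-byte block cipher, or -1 otherwise. */
static int pkcs7_padding_length(const unsigned char *buf, size_t len)
{
    if (len == 0 || len % 16 != 0)
        return -1;
    unsigned char pad = buf[len - 1];
    if (pad == 0 || pad > 16)
        return -1;
    for (size_t i = len - pad; i < len; i++)
        if (buf[i] != pad)
            return -1;          /* tail bytes are not all equal to pad */
    return pad;
}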
The problem is that any ciphertext with the correct block size will decrypt; the issue is that it will likely decrypt to random-looking garbage. Random-looking garbage is rarely compatible with the PKCS#7 padding scheme, hence the exception.
I say "problem" because this exception may be thrown if the key data is invalid, if the wrong padding or block mode was used, or simply if the input data was garbled along the way. The best way to debug this is to make 100% sure that the algorithms match, and that the binary input parameters (including defaults supplied by the API) match precisely on both sides.
