Wireshark MATE: Calculate response time

I'm just trying to use MATE to calculate the response time between each SMPP submit_sm and its submit_sm_resp. This is the MATE script I'm using:
Pdu smpp_pdu Proto smpp Transport mate {
    Extract cmd From smpp.command_id;
    Extract seq From smpp.sequence_number;
};
Gop smpp_session On smpp_pdu Match (seq) {
    Start (cmd=4);
    Stop (cmd=2147483652);
};
Done;
So basically, it extracts the command ID and sequence number, and the Gop then uses the command ID for Start/Stop:
4 = 0x00000004 = SUBMIT_SM
2147483652 = 0x80000004 = SUBMIT_SM_RESP
This should do the trick. But now what?
I added a column with Delta Time Displayed, which should show the response time for each submit_sm_resp, but that isn't using MATE at all; it just computes the time since the previous displayed packet.
How can I use the MATE script for this?
If I use the following filter in a specific column:
mate.smpp_pdu.RelativeTime
I only get, for each packet, the seconds elapsed since the start of the trace.
As far as I understand, MATE should track the time between Start and Stop, but which filter should I use? This doesn't show anything:
mate.smpp_session.Time
Please advise,
Thank you,
Lucas

Found the solution! Posting it here in case someone needs it:
Because cmd is extracted as a string, the correct Gop conditions are:
Start (cmd="0x00000004");
Stop (cmd="0x80000004");
That did the trick. With the Gop now actually starting and stopping, mate.smpp_session.Time (the time between the Gop's Start and Stop PDUs) should show the response time.
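For reference, the complete working script (the original, with only the Start and Stop lines changed):
Pdu smpp_pdu Proto smpp Transport mate {
    Extract cmd From smpp.command_id;
    Extract seq From smpp.sequence_number;
};
Gop smpp_session On smpp_pdu Match (seq) {
    Start (cmd="0x00000004");
    Stop (cmd="0x80000004");
};
Done;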

Related

Matching TOTP implementation with Google Authenticator

(Solution) TL;DR: Google assumes the key string is base32-encoded (after replacing any 1 with I and 0 with O); it must be decoded prior to hashing.
Original Question
I'm having difficulty getting my code to match up with GA. I even went chasing down counters ±~100,000 from the current time step and found nothing. I was very excited to see my function pass the SHA-1 tests in the RFC 6238 appendix, but when applied to "real life" it seems to fail.
I went so far as to look at the open-source code for Google Authenticator on GitHub. I used the key "qwertyuiopasdfgh" for testing. According to the GitHub code:
/*
 * Return key entered by user, replacing visually similar characters 1 and 0.
 */
private String getEnteredKey() {
    String enteredKey = keyEntryField.getText().toString();
    return enteredKey.replace('1', 'I').replace('0', 'O');
}
I believe my key would not be modified. Tracing through the files, it seems the key remains unchanged through the calls: AuthenticatorActivity.saveSecret() -> AccountDb.add() -> AccountDb.newContentValuesWith().
I compared my time between three sources:
(erlang shell): now()
(bash): date "+%s"
(Google/bash): pattern="\s*date\:\s*"; curl -I https://www.google.com 2>/dev/null | grep -iE $pattern | sed -e "s/$pattern//g" | xargs -0 date "+%s" -d
They are all the same. Despite that, my phone appears to be a bit off from my computer: it changes time steps out of sync with it. Still, my attempts to chase down the proper time step by ±thousands didn't find anything. According to the NetworkTimeProvider class, that is the time source for the app.
This code worked with all the SHA-1 tests in the RFC:
totp(Secret, Time) ->
    % {M, S, _} = os:timestamp(),
    Msg = binary:encode_unsigned(Time), %(M*1000000+S) div 30,
    %% Create 0-left-padded 64-bit binary from Time
    Bin = <<0:((8-size(Msg))*8), Msg/binary>>,
    %% Create SHA-1 hash
    Hash = crypto:hmac(sha, Secret, Bin),
    %% Determine dynamic offset
    Offset = 16#0f band binary:at(Hash, 19),
    %% Ignore that many bytes and store 4 bytes into THash
    <<_:Offset/binary, THash:4/binary, _/binary>> = Hash,
    %% Remove sign bit and create 6-digit code
    Code = (binary:decode_unsigned(THash) band 16#7fffffff) rem 1000000,
    %% Convert to text-string and 0-lead-pad if necessary
    lists:flatten(string:pad(integer_to_list(Code), 6, leading, $0)).
For it truly to match the RFC it would need to be modified for the 8-digit codes used there. I modified it to try to chase down the proper step; the goal was to figure out how my time was wrong. It didn't work out:
totp(_, _, _, 0) ->
    {ok, not_found};
totp(Secret, Goal, Ctr, Stop) ->
    Msg = binary:encode_unsigned(Ctr),
    Bin = <<0:((8-size(Msg))*8), Msg/binary>>,
    Hash = crypto:hmac(sha, Secret, Bin),
    Offset = 16#0f band binary:at(Hash, 19),
    <<_:Offset/binary, THash:4/binary, _/binary>> = Hash,
    Code = (binary:decode_unsigned(THash) band 16#7fffffff) rem 1000000,
    if
        Code =:= Goal ->
            {ok, {offset, 2880 - Stop}};
        true ->
            totp(Secret, Goal, Ctr + 1, Stop - 1) %% Did another run with Ctr-1
    end.
Anything obvious stick out?
I was tempted to write my own Android application implementing TOTP for my project, so I kept looking at the Java code. With the aid of a clone of the git repository and grep -R to find function calls, I discovered my problem: to get the same PIN codes as Google Authenticator, the key is assumed to be base32-encoded and must be decoded before it is passed to the hash algorithm.
There was a hint of this in getEnteredKey(): it replaces the 0 and 1 characters because they are not present in the base32 alphabet.
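For anyone landing here, a minimal Python 3 sketch of the corrected computation: the same algorithm as the Erlang code above, with the base32 decode added (the function name and defaults are mine):
import base64
import hashlib
import hmac
import struct
import time

def totp(key, digits=6, step=30):
    # Google Authenticator treats the key as base32 (RFC 4648); decode it
    # before hashing. Pad to a multiple of 8 chars, as b32decode requires.
    secret = base64.b32decode(key.upper() + "=" * (-len(key) % 8))
    # 64-bit big-endian counter: the current 30-second time step
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[19] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("qwertyuiopasdfgh"))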

Asterisk PBX - Infinite Loop when user disconnects while using 'Read' application from Lua

I'm configuring interactive dial plans for Asterisk at the moment, and since I already know some Lua I thought it'd be easier to go that route.
I have a start extension like this:
["h"] = function(c,e)
app.verbose("Hung Up")
end;
["s"] = function(c, e)
local d = 0
while d == 0 do
say:hello()
app.read("read_result", nil, 1)
d = channel["read_result"].value;
if d == 1 then
say:goodbye()
elseif d == 2 then
call:forward('front desk')
end
d = 0
end
say:goodbye()
end;
As you can see, I want to repeat the say:hello() instructions whenever the user gives an invalid answer. However, if the user hangs up while app.read is waiting for their answer, Asterisk ends up in an infinite loop, since d will always be nil.
I WOULD check for d == nil to detect the disconnection, but nil also shows up when the user just presses the # pound sign during app.read.
So far I've taken to using for loops instead of while, to limit the maximum number of iterations that way, but I'd rather find out how to detect a disconnected channel. I can't find any documentation on that, though.
I also tried setting up an h extension, but the program won't go to it when the user hangs up.
Asterisk Verbose Output:
[...]
-- Executing [s@test-call:1] read("PJSIP/2300-00000004", "read_result,,1")
-- Accepting a maximum of 1 digit.
-- User disconnected
-- Executing [s@test-call:1] read("PJSIP/2300-00000004", "read_result,,1")
-- Accepting a maximum of 1 digit.
-- User disconnected
-- Executing [s@test-call:1] read("PJSIP/2300-00000004", "read_result,,1")
-- Accepting a maximum of 1 digit.
-- User disconnected
-- Executing [s@test-call:1] read("PJSIP/2300-00000004", "read_result,,1")
[...]
Thanks for any help you might be able to offer.
First of all, as you can see in the app_read docs (and in any other doc), it returns different values when execution fails, e.g. when the channel is down.
This application also offers a simpler way to determine the result, via the READSTATUS channel variable:
core show application Read
-= Info about application 'Read' =-
[Synopsis]
Read a variable.
[Description]
Reads a #-terminated string of digits a certain number of times from the user
in to the given <variable>.
This application sets the following channel variable upon completion:
${READSTATUS}: This is the status of the read operation.
OK
ERROR
HANGUP
INTERRUPTED
SKIPPED
TIMEOUT
If that still doesn't suit you, you can ask Asterisk directly for CHANNEL(state).
PS: You should NEVER write a dialplan (or any other program) with an infinite loop. Count your iterations and bail out after 10 or so. This will save a LOT of money for your client.
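For example, a bounded sketch of the loop from the question that stops on hangup, using READSTATUS (say, app, call and channel are the question's own helpers; pbx_lua reads channel variables with :get()):
["s"] = function(c, e)
    for attempt = 1, 10 do                      -- bounded: never loop forever
        say:hello()
        app.read("read_result", nil, 1)
        -- Read() sets READSTATUS (OK, ERROR, HANGUP, INTERRUPTED, SKIPPED, TIMEOUT)
        local status = channel["READSTATUS"]:get()
        if status == "HANGUP" or status == "ERROR" then
            return                              -- caller is gone, stop immediately
        end
        local d = channel["read_result"]:get()  -- digits come back as a string
        if d == "1" then
            say:goodbye()
            return
        elseif d == "2" then
            call:forward('front desk')
            return
        end
        -- anything else: fall through and re-prompt
    end
    say:goodbye()                               -- give up after 10 invalid answers
end;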

Lua: Working with the Modbus TCP/IP Protocol

This question is based on a previous question I asked concerning a similar topic: Lua: Working with Bit32 Library to Change States of I/O's. I'm trying to use a Lua program so that, when a PLC changes the state of a coil at a given address (only two addresses will be used), it triggers a reaction in another piece of equipment. I have some code that is basically the same as in my previous topic, but this question is about what this code is actually doing, not so much the bit32 library. Usually I run code I don't understand in my Linux IDE and slowly make changes until I finally make sense of it, but this is producing some weird reactions that I can't explain.
Code example:
local unitId = 1
local funcCodes = {
    readCoil = 1,
    readInput = 2,
    readHoldingReg = 3,
    readInputReg = 4,
    writeCoil = 5,
    presetSingleReg = 6,
    writeMultipleCoils = 15,
    presetMultipleReg = 16
}
local function toTwoByte(value)
    return string.char(value / 255, value % 255)
end
local coil = 1
local function readCoil(s, coil)
    local req = toTwoByte(0) .. toTwoByte(0) .. toTwoByte(6) .. string.char(unitId, funcCodes.readCoil) .. toTwoByte(coil - 1) .. toTwoByte(1)
    s:write(req) -- (s is the handle for the I/O module)
    local res = s:read(10)
    return res:byte(10) == 1 -- returns true/false depending on whether the 10th byte == 1, I think??? Please confirm
end
The line that sets local req is the part I truly can't make sense of. Thanks to my earlier post, I fully understand the toTwoByte function and got a quick refresher on bit & byte manipulation (truly excellent, by the way), but that particular string is the source of my confusion. If I run this in the demo at lua.org, I get back the error "number has no integer representation". If I evaluate the pieces separately, I get back ASCII characters representing those numbers (I know string.char returns the character for a given code). If I run this in my Linux IDE, it displays a bunch of boxes, each containing four digits, two on top of the other two; it is very hard to distinguish the boxes and their content as they overlap.
I know there is a Modbus library that I may be able to use, but I would much rather understand this, as I'm fairly new to programming in general.
Why do I receive different results on Windows vs. Linux?
What does the req string actually look like when it reaches the I/O module? I don't understand how this req variable translates into the proper string containing all the information needed to read/write a given coil or register.
If anyone needs better examples or has further questions that I need to answer, please let me know.
Cheers!
ETA: This is with the Modbus TCP/IP Protocol, not RTU. Sorry.
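For reference, req is a standard Modbus TCP frame: the MBAP header followed by the read-coils PDU. Here is the same frame built in Python purely as an illustration, with the field values taken from the Lua code above:
import struct

def read_coil_request(unit_id, coil):
    # MBAP header + PDU; every 16-bit field goes out high byte first,
    # which is the two-byte split that toTwoByte() approximates.
    return struct.pack(
        ">HHHBBHH",
        0,         # transaction id (the Lua code sends 0)
        0,         # protocol id, always 0 for Modbus TCP
        6,         # length: number of bytes that follow (unit id + PDU)
        unit_id,   # unit / slave id
        1,         # function code 1 = read coils
        coil - 1,  # starting coil address, zero-based
        1,         # quantity of coils to read
    )

print(read_coil_request(1, 1).hex())  # 000000000006010100000001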

Getting a random number from 0 to 1000 daily on iOS

I have code to generate a random number from 0 to 1000; it is the following:
int randomValue = (random() % 1000);
I want the app to generate it daily at a specific time, for example at 10:00 am.
How can I do that?
I would like to answer: don't do that. You have an iDevice, not a 24/7 server. Do this on the server side instead, and when required, ask your server for the value...
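If it helps, a minimal sketch of that server-side idea in Python (names are mine): derive the day's number deterministically from the date, so every request on the same day returns the same value and it changes the next day:
import datetime
import hashlib

def daily_value():
    # Hash today's date so the value is stable for the whole day.
    # The modulo matches the question's code, so the range is 0-999.
    today = datetime.date.today().isoformat()
    digest = hashlib.sha256(today.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 1000

print(daily_value())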

How to Choose a Port Number?

I'm writing a program which uses ZeroMQ to communicate with other running programs on the same machine. I want to choose a port number at run time to avoid the possibility of collisions. Here is an example of a piece of code I wrote to accomplish this.
#!/usr/bin/perl -Tw
use strict;
use warnings;

my %in_use;
{
    local $ENV{PATH} = '/bin:/usr/bin';
    %in_use = map { $_ => 1 } split /\n/, qx(
        netstat -aunt |\
        awk '{print \$4}' |\
        grep : |\
        awk -F: '{print \$NF}'
    );
}
my ($port) = grep { not $in_use{$_} } 50_000 .. 59_999;
print "$port is available\n";
The procedure is:
invoke netstat -aunt
parse the result
choose the first port in a fixed range that doesn't appear in the netstat list.
Is there a system utility better suited to accomplishing this?
If you are using pyzmq, it can do this for you with bind_to_random_port():
import zmq

context = zmq.Context()
socket = context.socket(zmq.ROUTER)
port_selected = socket.bind_to_random_port('tcp://*', min_port=6001, max_port=6004, max_tries=100)
First of all: you do know that port numbers only go up to 65535, right? :-) Make sure whatever range you scan stays within that limit.
You can certainly do it this way, even though there are a couple of problems with the approach. The first problem is that netstat output differs between different operating systems so it's hard to do it portably. The second problem is that you still need to wrap the code in a loop which tries again to find a new port number in case it was not possible to bind to the chosen port number, because there's a race condition between ascertaining that the port is free and actually binding to it.
If the library you are using allows you to specify the port number as 0 and allows you to call getsockname() on the socket after it is bound, then you should just do that. Using 0 makes the system choose any free port number, and with getsockname() you can find out which port it chose.
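For example, a minimal sketch in Python (the same two calls exist in any language with BSD-style sockets):
import socket

# Bind to port 0: the OS picks any free port; getsockname() reveals which one.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
print(f"bound to port {port}")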
Failing that, it would probably be more efficient to skip netstat and just try binding to different port numbers in a loop: if you succeed, break out of the loop; if you fail, increment the port number by 1 and try again.
