Decoding TCAP message - dialoguePortion - Wireshark

I'm writing a simulator (for learning purposes) for a complete M3UA-SCCP-TCAP-MAP stack (over SCTP). So far the M3UA and SCCP stacks are OK.
M3UA is based on RFC 4666 (September 2006)
SCCP is based on ITU-T Q.711-Q.716
TCAP is based on ITU-T Q.771-Q.775
But upon decoding the TCAP part I got lost on the dialoguePortion.
TCAP is ASN.1 BER-encoded, so everything is tag+length+data (a minimal tag-length-value walker is sketched after the dump below).
Wireshark decodes it differently than my decoder does.
Message is:
62434804102f00676b1e281c060700118605010101a011600f80020780a1090607040000010005036c1ba1190201010201163011800590896734f283010086059062859107
Basically, my message BER-decodes as follows.
Note: the format is hex(tag) + (tag split into class + primitive/constructed flag + tag number, in decimal) + hex(data)
62 ( 64 32 2 )
48 ( 64 0 8 ) 102f0067
6b ( 64 32 11 )
28 ( 0 32 8 )
06 ( 0 0 6 ) 00118605010101 OID=0.0.17.773.1.1.1
a0 ( 128 32 0 )
60 ( 64 32 0 )
80 ( 128 0 0 ) 0780
a1 ( 128 32 1 )
06 ( 0 0 6 ) 04000001000503 OID=0.4.0.0.1.0.5.3
6c ( 64 32 12 )
...
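(For reference, here is the minimal tag-length-value walker mentioned above. It is a sketch in Go, not the asker's decoder, and handles only the single-byte tags and definite lengths this message actually uses.)

package main

import "fmt"

// readTLV decodes one BER tag+length header at offset off and returns the
// tag byte, whether the element is constructed, the value length, and the
// offset where the value starts. Multi-byte tags are not handled (this
// message has none).
func readTLV(b []byte, off int) (tag byte, constructed bool, length, valOff int) {
	tag = b[off]
	constructed = tag&0x20 != 0
	off++
	length = int(b[off])
	off++
	if length&0x80 != 0 { // long form: low bits give the number of length octets
		n := length & 0x7f
		length = 0
		for i := 0; i < n; i++ {
			length = length<<8 | int(b[off])
			off++
		}
	}
	return tag, constructed, length, off
}

func main() {
	msg := []byte{0x62, 0x43, 0x48, 0x04, 0x10, 0x2f, 0x00, 0x67} // start of the message
	tag, cons, n, off := readTLV(msg, 0)
	fmt.Printf("tag=%02x class=%d constructed=%v len=%d value at %d\n",
		tag, tag>>6, cons, n, off)
	// prints: tag=62 class=1 constructed=true len=67 value at 2
}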
So I can see a begin[2] message containing otid[8], dialoguePortion[11] and componentPortion[12].
otid and componentPortion are decoded correctly, but not dialoguePortion.
The ASN.1 for dialoguePortion does not mention any of these tags.
Even more confusing, Wireshark decodes it differently: the id-as-dialogue OID is NOT shown inside the dialoguePortion but as a field right after otid, which is NOT as described in the ITU-T documentation (or not as I'm understanding it).
Wireshark decoded:
Transaction Capabilities Application Part
    begin
        Source Transaction ID
            otid: 102f0067
        oid: 0.0.17.773.1.1.1 (id-as-dialogue)
        dialogueRequest
            Padding: 7
            protocol-version: 80 (version1)
                1... .... = version1: True
            application-context-name: 0.4.0.0.1.0.5.3 (locationInfoRetrievalContext-v3)
        components: 1 item
        ...
I can't find any reference to Padding in the DialoguePDU ASN.1.
Can someone point me in the right direction? I would like to know how to properly decode this message.
The DialoguePDU format should be simple in this case:
dialogue-as-id OBJECT IDENTIFIER ::= {itu-t recommendation q 773 as(1) dialogue-as(1) version1(1)}

DialoguePDU ::= CHOICE {
    dialogueRequest  AARQ-apdu,
    dialogueResponse AARE-apdu,
    dialogueAbort    ABRT-apdu
}

AARQ-apdu ::= [APPLICATION 0] IMPLICIT SEQUENCE {
    protocol-version         [0] IMPLICIT BIT STRING {version1(0)} DEFAULT {version1},
    application-context-name [1] OBJECT IDENTIFIER,
    user-information         [30] IMPLICIT SEQUENCE OF EXTERNAL OPTIONAL
}

Wireshark is still wrong :-). But then... that is just display. It shows the values correctly, only in the wrong section, probably to make the decode easier to follow.
What I was missing was the definition of EXTERNAL [8]. The DialoguePortion is declared in Q.773 as [APPLICATION 11] EXTERNAL, and EXTERNAL is universal tag 8, which is exactly the 28 ( 0 32 8 ) element in my dump... so now everything makes sense.

For your message, my very own decoder says:
begin [APPLICATION 2] (x67)
    otid [APPLICATION 8] (x4) = 102f0067h
    dialoguePortion [APPLICATION 11] (x30)
        EXTERNAL (x28)
            direct-reference [OBJECT IDENTIFIER] (x7) = 00118605010101h
            encoding: single-ASN1-type [0] (x17)
                dialogueRequest [APPLICATION 0] (x15)
                    protocol-version [0] (x2) = 80 {version1 (0)} spare bits = 7
                    application-context-name [1] (x9)
                        OBJECT IDENTIFIER (x7) = 04000001000503h
    components [APPLICATION 12] (x27)
        invoke [1] (x25)
            invokeID [INTEGER] (x1) = 1d (01h)
            operationCode [INTEGER] (x1) = (22) SendRoutingInfo
            parameter [SEQUENCE] (x17)
                msisdn [0] (x5) = 90896734f2h
                    Nature of Address: international number (1)
                    Numbering Plan Indicator: unknown (0)
                    signal = 9876432
                interrogationType [3] (x1) = (0) basicCall
                gmsc-Address [6] (x5) = 9062859107h
                    Nature of Address: international number (1)
                    Numbering Plan Indicator: unknown (0)
                    signal = 26581970
Now, Wireshark's "Padding: 7" and my "spare bits = 7" both refer to the protocol-version field, defined in Q.773 as:
AARQ-apdu ::= [APPLICATION 0] IMPLICIT SEQUENCE {
    protocol-version         [0] IMPLICIT BIT STRING {version1(0)} DEFAULT {version1},
    application-context-name [1] OBJECT IDENTIFIER,
    user-information         [30] IMPLICIT SEQUENCE OF EXTERNAL OPTIONAL
}
The BIT STRING definition assigns a name to just the leading bit (version1); the remaining 7 bits are not given names, and Wireshark reports them as padding. In BER, the first content octet of a BIT STRING carries the count of unused trailing bits, so the field encodes as 07 80: 7 spare bits, then the version1 bit set.
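To make that concrete, here is a small sketch in Go (not tied to any real decoder; the two content octets 07 80 are hard-coded from the trace above):

package main

import "fmt"

func main() {
	// Content octets of protocol-version as seen in the trace: 07 80.
	value := []byte{0x07, 0x80}
	unused := int(value[0])        // 7 spare bits: Wireshark's "Padding: 7"
	version1 := value[1]&0x80 != 0 // bit 0 (version1) is the MSB of the first data octet
	fmt.Printf("unused=%d version1=%v\n", unused, version1)
	// prints: unused=7 version1=true
}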

Related

What is wrong with the £ symbol when using it in an encryptor?

When I use the £ symbol in a password with an AES encryptor, I get the error:
Key length must be 128/192/256 bits
String pass = 'my_cool_password_£..............';
var key = Key.fromUtf8(pass);
var encrypter = Encrypter(AES(key));
encrypter.encrypt(plainText, iv: iv); // error `Key length must be 128/192/256 bits`
Stack trace
Unhandled exception:
Invalid argument(s): Key length must be 128/192/256 bits
#0 AESFastEngine.init (package:pointycastle/block/aes_fast.dart:66:7)
#1 SICStreamCipher.init (package:pointycastle/stream/sic.dart:55:22)
#2 StreamCipherAsBlockCipher.init (package:pointycastle/adapters/stream_cipher_as_block_cipher.dart:27:18)
#3 PaddedBlockCipherImpl.init (package:pointycastle/padded_block_cipher/padded_block_cipher_impl.dart:43:12)
#4 AES.encrypt (package:encrypt/src/algorithms/aes.dart:19:9)
#5 Encrypter.encryptBytes (package:encrypt/src/encrypter.dart:12:19)
#6 Encrypter.encrypt (package:encrypt/src/encrypter.dart:20:12)
The package used was https://pub.dev/packages/encrypt. Here is the package's encrypt function:
Encrypted encrypt(String input, {IV iv}) {
  return encryptBytes(convert.utf8.encode(input), iv: iv);
}
Since you are using UTF-8 to represent your password, you need to take into account that not every character can be represented with just 1 byte (8 bits).
E.g. the £ is represented using two bytes (16 bits): c2 a3.
This can be seen in the following example:
import 'dart:convert';

void main() {
  print(utf8.encode('my_cool_password_£..............').length * 8); // 264
  print(utf8.encode('my_cool_password_x..............').length * 8); // 256
  print(utf8.encode('£').length * 8); // 16
  print(utf8.encode('£').map((i) => i.toRadixString(16))); // (c2, a3)
}
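A common way around this (a general sketch, not specific to the encrypt package) is to derive a fixed-size key from the password instead of using its raw UTF-8 bytes, e.g. by hashing; shown here in Go for illustration:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	pass := "my_cool_password_£.............."
	// SHA-256 always yields 32 bytes (256 bits), no matter how many bytes
	// the UTF-8 encoding of the password happens to take.
	key := sha256.Sum256([]byte(pass))
	fmt.Println(len(key) * 8) // 256
}

In real applications you would prefer a proper password-based KDF (PBKDF2, scrypt, Argon2) with a salt, but either way the key length no longer depends on the password's UTF-8 byte count.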

Object does not exist in the PDF file structure

%PDF-1.5
...
10737 0 obj
<</MarkInfo<</Marked true>>/Metadata 161 0 R/PageLayout/OneColumn/Pages 10732 0 R/StructTreeRoot 206 0 R/Type/Catalog>>
endobj
10738 0 obj
<</Contents[10740 0 R 10741 0 R 10747 0 R 10748 0 R 10749 0 R 10750 0 R 10751 0 R 10752 0 R]/CropBox[0.0 0.0 516.0 728.64]/MediaBox[0.0 0.0 516.0 728.64]/Parent 10733 0 R/Resources<</ColorSpace<</CS0 10771 0 R/CS1 10772 0 R>>/ExtGState<</GS0 10773 0 R>>/Font<</C2_0 10778 0 R/C2_1 10783 0 R/C2_2 10788 0 R/C2_3 10793 0 R/C2_4 10798 0 R/TT0 10800 0 R/TT1 10802 0 R/TT2 10804 0 R/TT3 10806 0 R/TT4 10808 0 R>>/XObject<</Im0 10769 0 R>>>>/Rotate 0/StructParents 0/Tabs/S/Type/Page>>
endobj
10739 0 obj
<</Filter/FlateDecode/First 410/Length 3756/N 38/Type/ObjStm>>stream
10771 0 10772 21 10773 42 10774 138 10775 190 10776 442 10777 741 10778 752 10779 869 10780 921 10781 1190 10782 2050 10783 2061 10784 2192 10785 2244 10786 2504 10787 3456 10788 3467 10789 3587 10790 3639 10791 3903 10792 6058 10793 6069 10794 6196 10795 6248 10796 6507 10797 8153 10798 8164 10799 8284 10800 8496 10801 9662 10802 9894 10803 11072 10804 11325 10805 11779 10806 11985 10807 13147 10808 13395
[/ICCBased 10753 0 R][/ICCBased 10754 0 R]
<</AIS false/BM/Normal/CA 1.0/OP false/OPM 1/SA true/SMask/None/Type/ExtGState/ca 1.0/op false>>
<</Ordering(Identity)/Registry(Adobe)/Supplement 0>><</Ascent 858/CIDSet 10757 0 R/CapHeight 719/Descent -148/Flags 4/FontBBox[-16 -148 1008 858]/FontFamily(\xfe\xff\x00H\x00Y\xc9\x11\xac\xe0\xb5\x15)/FontFile2 10758 0 R/FontName/YDRADB+H2gtrM/FontStretch/Normal/FontWeight 400/ItalicAngle 0/StemV 60/Type/FontDescriptor/XHeight 520>>
...
endstream
endobj
...
No.                              Type
10732                            Pages
206                              StructTreeRoot
10771, 10772, 10773, 10778 ...   Font
Many indirect objects, including 10732, 206, 10771 and 10772, do not appear as top-level objects in the PDF file.
But I think I found objects 10771-10808 in the stream of object 10739.
Q1. Why are there no objects 10732 (Pages) and 206 (StructTreeRoot) in the PDF file?
Q2. Why are indirect objects inside a stream?
I would be grateful for any explanations or resources for reference.
Starting with version 1.5, PDF supports so-called object streams, i.e. stream objects which contain other non-stream objects.
Your object 10739 is such an object stream, as you can see from its /Type /ObjStm.
This allows those contained objects to be compressed. In particular structure tree objects, which otherwise can substantially increase the size of a PDF, compress fairly well, reducing their impact on the document size.
For details please study the PDF specification, section 7.5.7 "Object Streams", in either the current PDF specification ISO 32000-2 or its predecessor ISO 32000-1.
Adobe has shared a copy of ISO 32000-1 on their web site which merely has its ISO page headers replaced. Simply google for "PDF32000_2008"; currently it is located at https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/PDF32000_2008.pdf but as far as I know this isn't a permalink.
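To make Q2 concrete: the decompressed data of an object stream begins with /N pairs of integers, each giving an object number and that object's byte offset relative to /First, which is exactly the run of numbers quoted above (10771 0 10772 21 10773 42 ...). A minimal sketch of reading such a header, in Go, assuming the stream has already been inflated:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// First pairs from object 10739's stream (the real header has
	// N=38 pairs and /First=410).
	header := "10771 0 10772 21 10773 42"
	f := strings.Fields(header)
	for i := 0; i+1 < len(f); i += 2 {
		fmt.Printf("object %s starts at offset /First + %s in the stream\n", f[i], f[i+1])
	}
}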

Decode UDP message with Lua

I'm relatively new to Lua and programming in general (self-taught), so please be gentle!
Anyway, I wrote a Lua script to read a UDP message from a game. The structure of the message is:
DATAxXXXXaaaaBBBBccccDDDDeeeeFFFFggggHHHH
DATA = 4-letter ID, x = a control character
XXXX = integer giving the group of the data (the groups are known)
aaaa...HHHH = 8 single-precision floating point numbers
Those last eight numbers are the ones I need to decode.
If I print the message as received, it's something like:
DATA*{V???A?A?...etc.
Using string.byte(), I'm getting a stream of bytes like this (I have "formatted" the bytes to reflect the structure above):
68 65 84 65/42/20 0 0 0/237 222 28 66/189 59 182 65/107 42 41 65/33 173 79 63/0 0 128 63/146 41 41 65/0 0 30 66/0 0 184 65
The first 5 bytes are of course "DATA*". The next 4 are the group number (20, little-endian). The remaining bytes are the ones I need to decode; they correspond to these values:
237 222 28 66 = 39.218
189 59 182 65 = 22.779
107 42 41 65 = 10.573
33 173 79 63 = 0.8114
0 0 128 63 = 1.0000
146 41 41 65 = 10.573
0 0 30 66 = 39.500
0 0 184 65 = 23.000
I've found C# code that does the decoding with BitConverter.ToSingle(), but I haven't found anything like it for Lua.
Any ideas?
What Lua version do you have?
This code works in Lua 5.3
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
-- Read two float values starting from position 10 in the string
print(string.unpack("<ff", str, 10)) --> 39.217700958252 22.779169082642 18
-- 18 (third returned value) is the next position in the string
For Lua 5.1 you have to write a special function (or take it from François Perrad's git repo):
local function binary_to_float(str, pos)
  -- little-endian: b4 is the most significant byte
  local b1, b2, b3, b4 = str:byte(pos, pos + 3)
  local sign = b4 > 0x7F and -1 or 1
  local expo = (b4 % 0x80) * 2 + math.floor(b3 / 0x80) -- 8-bit biased exponent
  local mant = ((b3 % 0x80) * 0x100 + b2) * 0x100 + b1 -- 23-bit mantissa
  local n
  if mant + expo == 0 then
    n = sign * 0.0                            -- signed zero
  elseif expo == 0xFF then
    n = (mant == 0 and sign or 0) / 0         -- +/-inf, or nan when mant ~= 0
  elseif expo == 0 then
    n = sign * (mant / 0x800000) * 2.0^(-126) -- subnormals: no implicit leading 1
  else
    n = sign * (1 + mant / 0x800000) * 2.0^(expo - 0x7F)
  end
  return n
end
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
print(binary_to_float(str, 10)) --> 39.217700958252
print(binary_to_float(str, 14)) --> 22.779169082642
It's the little-endian byte order of an IEEE-754 single-precision binary value.
E.g., 0 0 128 63, read with the most significant byte first, is:
00111111 10000000 00000000 00000000
  (63)     (128)     (0)      (0)
Why that equals 1 comes down to the very basics of IEEE-754 representation, namely its use of an exponent and mantissa: the sign bit is 0, the biased exponent is 01111111 = 127, giving a scale of 2^(127-127) = 1, and the mantissa bits are all 0, giving a significand of 1.0; so the value is 1.0 * 1 = 1.
See Egor's answer above for how to use string.unpack() in Lua 5.3, and one possible implementation you could use in earlier versions.
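As a cross-check outside Lua, here is the same little-endian IEEE-754 interpretation sketched in Go:

package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

func main() {
	// Byte groups from the question, in received (little-endian) order.
	for _, b := range [][]byte{
		{237, 222, 28, 66}, // ~39.2177
		{0, 0, 128, 63},    // 1.0
	} {
		fmt.Println(math.Float32frombits(binary.LittleEndian.Uint32(b)))
	}
}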

Delphi Set Invalid Typecast

How do I fix this invalid typecast error? The code works when the set has 32 or fewer items. Below is the code snippet:
type
TOptionsSurveyTypes=(
ostLoadSurvey00,
ostLoadSurvey01,
ostLoadSurvey02,
ostLoadSurvey03,
ostLoadSurvey04,
ostLoadSurvey05,
ostLoadSurvey06,
ostLoadSurvey07,
ostLoadSurvey08,
ostLoadSurvey09,
ostLoadSurvey10,
ostEventLog01,
ostEventLog02,
ostEventLog03,
ostEventLog04,
ostEventLog05,
ostSagSwell,
ostTamper,
ostWaveforms,
ostDeviceList,
ostDeleteData,
ostTOUBillingTotal,
ostTOUPrevious,
ostProfileGenericLoadSurvey01,
ostProfileGenericLoadSurvey02,
ostProfileGenericLoadSurvey03,
ostProfileGenericLoadSurvey04,
ostProfileGenericLoadSurvey05,
ostProfileGenericLoadSurvey06,
ostProfileGenericLoadSurvey07,
ostProfileGenericLoadSurvey08,
ostProfileGenericLoadSurvey09,
ostProfileGenericLoadSurvey10,
ostProfileGenericEventLog01,
ostProfileGenericEventLog02,
ostProfileGenericEventLog03,
ostProfileGenericEventLog04,
ostProfileGenericEventLog05,
ostProfileGenericBillingTotal,
ostProfileGenericPrevious,
ostProfileGeneric
);
TOptionsSurveyTypesSet=set of TOptionsSurveyTypes;
function TUserProcessRollback.SurveyRollBack: boolean;
var
  vRollbackDate: TDateTime;
  FOptions: LongWord;
begin
  ...
  if ostDeleteData in TOptionsSurveyTypesSet(FOptions) then // <-- invalid typecast error here
    vRollbackDate := 0.00
  else
    vRollbackDate := FRollbackDate;
  ...
end;
When I reduce the set to 32 or fewer items and FOptions is declared as DWORD, the code compiles.
Thanks
Your enumerated type has 41 items. Each byte holds 8 bits, so a set of this enumerated type requires at least 41 bits. The smallest number of bytes that can hold 41 bits is 6, so the set type is 6 bytes. To confirm this, you can execute:
ShowMessage(IntToStr(SizeOf(TOptionsSurveyTypesSet)));
A DWORD is 4 bytes, so it cannot be typecast into a type that is 6 bytes. If you declare FOptions to be a 6-byte type, your code will compile:
FOptions: packed array [1..6] of byte;
If you reduce the enumerated type to 32 or fewer items, then the set type will be 4 bytes, and the typecast from DWORD will work.
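For intuition about what the typecast does: a Delphi set is stored as a bitmask with one bit per enum member in declaration order, so ostDeleteData (the 21st member) is bit 20, and 41 members need ceil(41/8) = 6 bytes. Here is a sketch of the same membership test in Go, just to illustrate the representation:

package main

import "fmt"

func main() {
	// ostDeleteData is the 21st member of TOptionsSurveyTypes, so bit 20.
	const ostDeleteData = 20
	var options uint64 = 1 << ostDeleteData // FOptions widened to 64 bits so all 41 flags fit
	if options&(1<<ostDeleteData) != 0 {    // Delphi's "in" test is this bit test
		fmt.Println("ostDeleteData is in the set")
	}
}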

Golang append memory allocation VS. STL push_back memory allocation

I compared Go's append function with the STL's vector::push_back and found that they use different memory allocation strategies, which confused me. The code is as follows:
// CPP STL code
#include <cstdio>
#include <cstdlib>
#include <vector>
using namespace std;

void getAlloc() {
    vector<double> arr;
    int s = 9999999;
    int precap = arr.capacity();
    for (int i = 0; i < s; i++) {
        if (precap < i) {
            arr.push_back(rand() % 12580 * 1.0);
            precap = arr.capacity();
            printf("%d %p\n", precap, (void*)&arr[0]);
        } else {
            arr.push_back(rand() % 12580 * 1.0);
        }
    }
    printf("\n");
}
// Golang code (imports needed: "log", "math/rand")
func getAlloc() {
    arr := []float64{}
    size := 9999999
    pre := cap(arr)
    for i := 0; i < size; i++ {
        if pre < i {
            arr = append(arr, rand.NormFloat64())
            pre = cap(arr)
            log.Printf("%d %p\n", pre, &arr)
        } else {
            arr = append(arr, rand.NormFloat64())
        }
    }
}
But the Go address stays the same as the slice keeps expanding, which really confused me.
By the way, the memory allocation strategy (the growth factor) differs between the two implementations (STL vs. Go). Is there any advantage or disadvantage to either? Here is the simplified output of the code above [capacity and first-element address]:
Golang CPP STL
2 0xc0800386c0 2 004B19C0
4 0xc0800386c0 4 004AE9B8
8 0xc0800386c0 6 004B29E0
16 0xc0800386c0 9 004B2A18
32 0xc0800386c0 13 004B2A68
64 0xc0800386c0 19 004B2AD8
128 0xc0800386c0 28 004B29E0
256 0xc0800386c0 42 004B2AC8
512 0xc0800386c0 63 004B2C20
1024 0xc0800386c0 94 004B2E20
1280 0xc0800386c0 141 004B3118
1600 0xc0800386c0 211 004B29E0
2000 0xc0800386c0 316 004B3080
2500 0xc0800386c0 474 004B3A68
3125 0xc0800386c0 711 004B5FD0
3906 0xc0800386c0 1066 004B7610
4882 0xc0800386c0 1599 004B9768
6102 0xc0800386c0 2398 004BC968
7627 0xc0800386c0 3597 004C1460
9533 0xc0800386c0 5395 004B5FD0
11916 0xc0800386c0 8092 004C0870
14895 0xc0800386c0 12138 004D0558
18618 0xc0800386c0 18207 004E80B0
23272 0xc0800386c0 27310 0050B9B0
29090 0xc0800386c0 40965 004B5FD0
36362 0xc0800386c0 61447 00590048
45452 0xc0800386c0 92170 003B0020
56815 0xc0800386c0 138255 00690020
71018 0xc0800386c0 207382 007A0020
....
UPDATE:
See comments for Golang memory allocation strategy.
For STL, the strategy depends on the implementation. See this post for further information.
Your Go and C++ code fragments are not equivalent. In the C++ function, you are printing the address of the first element in the vector, while in the Go example you are printing the address of the slice itself.
Like a C++ std::vector, a Go slice is a small data type that holds a pointer to an underlying array that holds the data. That data structure has the same address throughout the function. If you want the address of the first element in the slice, you can use the same syntax as in C++: &arr[0].
You're getting the pointer to the slice header, not the actual backing array. You can think of the slice header as a struct like:

type SliceHeader struct {
    len, cap     int
    backingArray unsafe.Pointer
}
When you append and the backing array is reallocated, the pointer backingArray will likely be changed (not necessarily, but probably). However, the location of the struct holding the length, cap, and pointer to the backing array doesn't change -- it's still on the stack right where you declared it. Try printing &arr[0] instead of &arr and you should see behavior closer to what you expect.
This is pretty much the same behavior as std::vector, incidentally. Think of a slice as closer to a vector than a magic dynamic array.
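A minimal demonstration of the distinction (a sketch; the exact capacities and addresses vary by Go version and platform):

package main

import "fmt"

func main() {
	arr := []float64{}
	pre := cap(arr)
	for i := 0; i < 1000; i++ {
		arr = append(arr, float64(i))
		if cap(arr) != pre {
			pre = cap(arr)
			// &arr (the header) never moves; &arr[0] (the backing
			// array) moves whenever append has to reallocate.
			fmt.Printf("cap=%4d header=%p first=%p\n", cap(arr), &arr, &arr[0])
		}
	}
}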
