How do I fix this invalid typecast error? The code works when the set has fewer than 31 items. Below is the code snippet:
type
TOptionsSurveyTypes=(
ostLoadSurvey00,
ostLoadSurvey01,
ostLoadSurvey02,
ostLoadSurvey03,
ostLoadSurvey04,
ostLoadSurvey05,
ostLoadSurvey06,
ostLoadSurvey07,
ostLoadSurvey08,
ostLoadSurvey09,
ostLoadSurvey10,
ostEventLog01,
ostEventLog02,
ostEventLog03,
ostEventLog04,
ostEventLog05,
ostSagSwell,
ostTamper,
ostWaveforms,
ostDeviceList,
ostDeleteData,
ostTOUBillingTotal,
ostTOUPrevious,
ostProfileGenericLoadSurvey01,
ostProfileGenericLoadSurvey02,
ostProfileGenericLoadSurvey03,
ostProfileGenericLoadSurvey04,
ostProfileGenericLoadSurvey05,
ostProfileGenericLoadSurvey06,
ostProfileGenericLoadSurvey07,
ostProfileGenericLoadSurvey08,
ostProfileGenericLoadSurvey09,
ostProfileGenericLoadSurvey10,
ostProfileGenericEventLog01,
ostProfileGenericEventLog02,
ostProfileGenericEventLog03,
ostProfileGenericEventLog04,
ostProfileGenericEventLog05,
ostProfileGenericBillingTotal,
ostProfileGenericPrevious,
ostProfileGeneric
);
TOptionsSurveyTypesSet=set of TOptionsSurveyTypes;
function TUserProcessRollback.SurveyRollBack:boolean;
var
vRollbackDate: TDateTime;
FOptions: LongWord;
begin
...
if ostDeleteData in TOptionsSurveyTypesSet(FOptions) then // <-- invalid typecast error here
vRollbackDate := 0.00
else
vRollbackDate := FRollbackDate;
...
end;
When I reduce the set to fewer than 32 items and FOptions is declared as DWORD, the code compiles.
Thanks
Your enumerated type has 41 items. A set needs one bit per element, so a set of this enumerated type requires at least 41 bits. The smallest number of bytes that can hold 41 bits is 6, so the set type is 6 bytes. To confirm this, you can execute:
ShowMessage(IntToStr(SizeOf(TOptionsSurveyTypesSet)));
A DWORD is 4 bytes, so it cannot be typecast to a type that is 6 bytes. If you declare FOptions as a 6-byte type, your code will compile:
FOptions: packed array [ 1 .. 6] of byte;
If you reduce the enumerated type to 32 or fewer items, then the set type will be 4 bytes, and so the typecast from DWORD will work.
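For illustration, a minimal Delphi sketch (the procedure name is mine, purely illustrative) showing that the sizes now line up and the cast compiles:
procedure CheckOptions;
var
  FOptions: packed array[1..6] of Byte;
begin
  // 41 enum elements need 41 bits, which rounds up to 6 bytes:
  Assert(SizeOf(TOptionsSurveyTypesSet) = 6);
  FillChar(FOptions, SizeOf(FOptions), 0);
  // The variable typecast compiles because both sides are 6 bytes:
  if ostDeleteData in TOptionsSurveyTypesSet(FOptions) then
    ShowMessage('ostDeleteData is set');
end;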
When I use the £ symbol in a password with an AES encryptor, I get the error
Key length must be 128/192/256 bits
String pass = 'my_cool_password_£..............';
var key = Key.fromUtf8(pass);
var encrypter = Encrypter(AES(key));
encrypter.encrypt(plainText, iv: iv); // error `Key length must be 128/192/256 bits`
Stack trace
Unhandled exception:
Invalid argument(s): Key length must be 128/192/256 bits
#0 AESFastEngine.init (package:pointycastle/block/aes_fast.dart:66:7)
#1 SICStreamCipher.init (package:pointycastle/stream/sic.dart:55:22)
#2 StreamCipherAsBlockCipher.init (package:pointycastle/adapters/stream_cipher_as_block_cipher.dart:27:18)
#3 PaddedBlockCipherImpl.init (package:pointycastle/padded_block_cipher/padded_block_cipher_impl.dart:43:12)
#4 AES.encrypt (package:encrypt/src/algorithms/aes.dart:19:9)
#5 Encrypter.encryptBytes (package:encrypt/src/encrypter.dart:12:19)
#6 Encrypter.encrypt (package:encrypt/src/encrypter.dart:20:12)
The package used is https://pub.dev/packages/encrypt
Here is the package's encrypt function:
Encrypted encrypt(String input, {IV iv}) {
return encryptBytes(convert.utf8.encode(input), iv: iv);
}
Since you are using UTF-8 to represent your password, you need to take into account that not all characters can be represented with only 1 byte (8 bits).
E.g. the £ is represented using two bytes (16 bits): c2 a3
This can be seen in the following example:
import 'dart:convert';
void main() {
print(utf8.encode('my_cool_password_£..............').length * 8); // 264
print(utf8.encode('my_cool_password_x..............').length * 8); // 256
print(utf8.encode('£').length * 8); // 16
print(utf8.encode('£').map((i) => i.toRadixString(16))); // (c2, a3)
}
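One common fix, sketched below under the assumption that you can use the crypto package and that hashing the passphrase is acceptable for your use case, is to derive a fixed 256-bit key from the passphrase instead of feeding AES its raw UTF-8 bytes:
import 'dart:convert';
import 'dart:typed_data';
import 'package:crypto/crypto.dart';
import 'package:encrypt/encrypt.dart';

Key keyFromPassphrase(String pass) {
  // SHA-256 always yields 32 bytes (256 bits), no matter how many
  // bytes the UTF-8 encoding of the passphrase takes.
  final digest = sha256.convert(utf8.encode(pass));
  return Key(Uint8List.fromList(digest.bytes));
}
For real password-based encryption a proper KDF such as PBKDF2 would be preferable, but the point is the same: the key handed to AES must be exactly 128, 192 or 256 bits long.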
I'm writing a simulator (for learning purposes) for the complete M3UA-SCCP-TCAP-MAP stack (over SCTP). So far the M3UA and SCCP stacks are OK.
M3UA: based on RFC 4666 (Sept 2006)
SCCP: based on ITU-T Q.711-Q.716
TCAP: based on ITU-T Q.771-Q.775
But while decoding the TCAP part I got lost on the dialoguePortion.
TCAP is ASN.1 (BER) encoded, so everything is tag+len+data.
Wireshark decodes it differently from my decoder.
Message is:
62434804102f00676b1e281c060700118605010101a011600f80020780a1090607040000010005036c1ba1190201010201163011800590896734f283010086059062859107
Basically, my message is BER-decoded as
Note: the format is hex(tag) + (BER tag split into CLS+PC+TAG in decimal) + hex(data)
62 ( 64 32 2 )
48 ( 64 0 8 ) 102f0067
6b ( 64 32 11 )
28 ( 0 32 8 )
06 ( 0 0 6 ) 00118605010101 OID=0.0.17.773.1.1.1
a0 ( 128 32 0 )
60 ( 64 32 0 )
80 ( 128 0 0 ) 0780
a1 ( 128 32 1 )
06 ( 0 0 6 ) 04000001000503 OID=0.4.0.0.1.0.5.3
6c ( 64 32 12 )
...
So I can see a begin[2] message containing otid[8], dialoguePortion[11] and componentPortion[12].
The otid and componentPortion are decoded correctly, but not the dialoguePortion.
The ASN.1 for the dialoguePortion does not mention any of these codes.
Even more confusing, Wireshark decodes it differently (the id-as-dialogue OID is NOT inside the dialoguePortion, but appears as a field after otid, which is NOT as described in the ITU-T documentation, or not as I'm understanding it).
Wireshark decoded Transaction Capabilities Application Part
begin
Source Transaction ID
otid: 102f0067
oid: 0.0.17.773.1.1.1 (id-as-dialogue)
dialogueRequest
Padding: 7
protocol-version: 80 (version1)
1... .... = version1: True
application-context-name: 0.4.0.0.1.0.5.3 (locationInfoRetrievalContext-v3)
components: 1 item
...
I can't find any reference to Padding in the DialoguePDU ASN.1.
Can someone point me in the right direction? I would like to know how to properly decode this message.
DialoguePDU format should be simple in this case:
dialogue-as-id OBJECT IDENTIFIER ::= {itu-t recommendation q 773 as(1) dialogue-as(1) version1(1)}
DialoguePDU ::= CHOICE {
dialogueRequest AARQ-apdu,
dialogueResponse AARE-apdu,
dialogueAbort ABRT-apdu
}
AARQ-apdu ::= [APPLICATION 0] IMPLICIT SEQUENCE {
protocol-version [0] IMPLICIT BIT STRING {version1(0)} DEFAULT {version1},
application-context-name [1] OBJECT IDENTIFIER,
user-information [30] IMPLICIT SEQUENCE OF EXTERNAL OPTIONAL
}
Wireshark is still wrong :-). But then... that is just display. It shows the values correctly, only in the wrong section, probably for some reason that makes decoding easier.
What I was missing was the definition of EXTERNAL [UNIVERSAL 8]. The DialoguePortion is declared as EXTERNAL... so now everything makes sense.
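For reference, this is the associated type for EXTERNAL from X.208 (quoted here for convenience, not part of the original answer), which is exactly where the extra tags come from:
EXTERNAL ::= [UNIVERSAL 8] IMPLICIT SEQUENCE {
    direct-reference       OBJECT IDENTIFIER OPTIONAL,
    indirect-reference     INTEGER OPTIONAL,
    data-value-descriptor  ObjectDescriptor OPTIONAL,
    encoding CHOICE {
        single-ASN1-type   [0] ANY,
        octet-aligned      [1] IMPLICIT OCTET STRING,
        arbitrary          [2] IMPLICIT BIT STRING } }
In the question's trace this accounts for the 28 tag (UNIVERSAL constructed 8), the OBJECT IDENTIFIER holding id-as-dialogue (the direct-reference), and the a0 tag ([0] single-ASN1-type) wrapping the dialogueRequest.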
For your message, my very own decoder says:
begin [APPLICATION 2] (x67)
otid [APPLICATION 8] (x4) =102f0067h
dialoguePortion [APPLICATION 11] (x30)
EXTERNAL (x28)
direct-reference [OBJECT IDENTIFIER] (x7) =00118605010101h
encoding:single-ASN1-type [0] (x17)
dialogueRequest [APPLICATION 0] (x15)
protocol-version [0] (x2) = 80 {version1 (0) } spare bits= 7
application-context-name [1] (x9)
OBJECT IDENTIFIER (x7) =04000001000503h
components [APPLICATION 12] (x27)
invoke [1] (x25)
invokeID [INTEGER] (x1) =1d (01h)
operationCode [INTEGER] (x1) = (22) SendRoutingInfo
parameter [SEQUENCE] (x17)
msisdn [0] (x5) = 90896734f2h
Nature of Address: international number (1)
Numbering Plan Indicator: unknown (0)
signal= 9876432
interrogationType [3] (x1) = (0) basicCall
gmsc-Address [6] (x5) = 9062859107h
Nature of Address: international number (1)
Numbering Plan Indicator: unknown (0)
signal= 26581970
Now, Wireshark's padding 7 and my spare bits=7 both refer to the protocol-version field, defined in Q.773 as:
AARQ-apdu ::= [APPLICATION 0] IMPLICIT SEQUENCE {
protocol-version [0] IMPLICIT BIT STRING { version1 (0) }
DEFAULT { version1 },
application-context-name [1] OBJECT IDENTIFIER,
user-information [30] IMPLICIT SEQUENCE OF EXTERNAL
OPTIONAL }
The BIT STRING definition assigns a name to just the leading bit (version1); the remaining 7 bits are not given a name, and Wireshark considers them padding. You can see this in the raw encoding of the field, 80 02 07 80: in BER, the first contents octet of a primitive BIT STRING (here 07) counts the unused bits in the final octet, and 80 sets only bit 0 (version1), leaving those 7 unnamed bits spare.
I compared the Go append function with the STL vector.push_back and found different memory allocation strategies, which confused me. The code is as follows:
// CPP STL code
#include <cstdio>
#include <cstdlib>
#include <vector>
using std::vector;

void getAlloc() {
vector<double> arr;
int s = 9999999;
int precap = arr.capacity();
for (int i=0; i<s; i++) {
if (precap < i) {
arr.push_back(rand() % 12580 * 1.0);
precap = arr.capacity();
printf("%d %p\n", precap, &arr[0]);
} else {
arr.push_back(rand() % 12580 * 1.0);
}
}
printf("\n");
return;
}
// Golang code
package main

import (
	"log"
	"math/rand"
)

func getAlloc() {
arr := []float64{}
size := 9999999
pre := cap(arr)
for i:=0; i<size; i++ {
if pre < i {
arr = append(arr, rand.NormFloat64())
pre = cap(arr)
log.Printf("%d %p\n", pre, &arr)
} else {
arr = append(arr, rand.NormFloat64())
}
}
return
}
But the Go memory address is invariant as the size expands, and this really confused me.
By the way, the memory allocation strategy is different in these two implementations (STL vs. Go), I mean the expanding size. Is there any advantage or disadvantage? Here is the simplified output of the code above [size and first element address]:
Golang CPP STL
2 0xc0800386c0 2 004B19C0
4 0xc0800386c0 4 004AE9B8
8 0xc0800386c0 6 004B29E0
16 0xc0800386c0 9 004B2A18
32 0xc0800386c0 13 004B2A68
64 0xc0800386c0 19 004B2AD8
128 0xc0800386c0 28 004B29E0
256 0xc0800386c0 42 004B2AC8
512 0xc0800386c0 63 004B2C20
1024 0xc0800386c0 94 004B2E20
1280 0xc0800386c0 141 004B3118
1600 0xc0800386c0 211 004B29E0
2000 0xc0800386c0 316 004B3080
2500 0xc0800386c0 474 004B3A68
3125 0xc0800386c0 711 004B5FD0
3906 0xc0800386c0 1066 004B7610
4882 0xc0800386c0 1599 004B9768
6102 0xc0800386c0 2398 004BC968
7627 0xc0800386c0 3597 004C1460
9533 0xc0800386c0 5395 004B5FD0
11916 0xc0800386c0 8092 004C0870
14895 0xc0800386c0 12138 004D0558
18618 0xc0800386c0 18207 004E80B0
23272 0xc0800386c0 27310 0050B9B0
29090 0xc0800386c0 40965 004B5FD0
36362 0xc0800386c0 61447 00590048
45452 0xc0800386c0 92170 003B0020
56815 0xc0800386c0 138255 00690020
71018 0xc0800386c0 207382 007A0020
....
UPDATE:
See comments for Golang memory allocation strategy.
For STL, the strategy depends on the implementation. See this post for further information.
Your Go and C++ code fragments are not equivalent. In the C++ function, you are printing the address of the first element in the vector, while in the Go example you are printing the address of the slice itself.
Like a C++ std::vector, a Go slice is a small data type that holds a pointer to an underlying array that holds the data. That data structure has the same address throughout the function. If you want the address of the first element in the slice, you can use the same syntax as in C++: &arr[0].
You're getting the pointer to the slice header, not the actual backing array. You can think of the slice header as a struct like
type SliceHeader struct {
	len, cap     int
	backingArray unsafe.Pointer
}
When you append and the backing array is reallocated, the pointer backingArray will likely be changed (not necessarily, but probably). However, the location of the struct holding the length, cap, and pointer to the backing array doesn't change -- it's still on the stack right where you declared it. Try printing &arr[0] instead of &arr and you should see behavior closer to what you expect.
This is pretty much the same behavior as std::vector, incidentally. Think of a slice as closer to a vector than a magic dynamic array.
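A minimal, self-contained sketch of the difference (illustrative code, not from the question):
package main

import "fmt"

func main() {
	arr := []float64{}
	pre := cap(arr)
	for i := 0; i < 1000; i++ {
		arr = append(arr, float64(i))
		if cap(arr) != pre {
			pre = cap(arr)
			// &arr (the slice header) never moves, while &arr[0]
			// (the backing array) changes whenever append reallocates.
			fmt.Printf("cap=%4d header=%p first=%p\n", pre, &arr, &arr[0])
		}
	}
}
With &arr[0], the Go output shows moving addresses just like the C++ column.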
I'm working with the variant record below. The variable instance is Kro_Combi. SizeOf(Kro_Combi) reports 7812 bytes. SizeOf(Kro_Combi.data) reports 7810 bytes.
The sum of the SizeOf values of all the other data structures composing the "non-directmode" case of the variant record also adds up to 7810 bytes.
Why is there a two-byte difference? I would like the two variants to exactly overlay each other.
TKro_Combi = record
case directmode:boolean of
true : (
data : array[0..7809] of byte
);
false : (
Combi_Name : array[0..23] of char; //24
Gap1 : array[0..63] of byte; // 24-87 (64)
Ins_Effect_Group : array[1..12] of TIns_Effect_Params; //74 each, (Ins_Effect_Data=9 bytes) 74*12 = 888
Mast_Effect_Params : array[0..229] of byte; // 976-1205 : 230 bytes
Vect_Aud__Drum_Params : array[0..97] of byte; //1206-1303 : 98 bytes
Karma_Common : array[0..509] of byte; //1304-1813 : 510 bytes
Karma_Module : array[0..3] of TKarma_Module; //1814-2557 : 744 bytes each Total span 1814 - 4789 = 2976 bytes total
Common_Params : array[0..11] of byte; //4790-4801 = 12 bytes
Timbre_Group : array[1..16] of TTimbre_Params; ) // 4802 -4989 = 188 bytes each, 16 Timbres, 4802-7809 = 3008 bytes total for all
end;
First of all, there needs to be space for the directmode tag field. If you really want the record to have size 7810 bytes then you should remove that field. The other byte will be due to internal alignment and padding of the false part of the variant record; I can't quite work out where it comes from. No matter, you simply want to use a packed record (and drop the tag field) to avoid any padding bytes:
TKro_Combi = packed record
case boolean of
true : (
data : array[0..7809] of byte
);
false : (
Combi_Name : array[0..23] of char; //24
Gap1 : array[0..63] of byte; // 24-87 (64)
Ins_Effect_Group : array[1..12] of TIns_Effect_Params; //74 each, (Ins_Effect_Data=9 bytes) 74*12 = 888
Mast_Effect_Params : array[0..229] of byte; // 976-1205 : 230 bytes
Vect_Aud__Drum_Params : array[0..97] of byte; //1206-1303 : 98 bytes
Karma_Common : array[0..509] of byte; //1304-1813 : 510 bytes
Karma_Module : array[0..3] of TKarma_Module; //1814-2557 : 744 bytes each Total span 1814 - 4789 = 2976 bytes total
Common_Params : array[0..11] of byte; //4790-4801 = 12 bytes
Timbre_Group : array[1..16] of TTimbre_Params; ) // 4802 -4989 = 188 bytes each, 16 Timbres, 4802-7809 = 3008 bytes total for all
end;
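A quick sanity check, sketched under the assumption that the nested types (TIns_Effect_Params, TKarma_Module, TTimbre_Params) really have the sizes given in the comments, i.e. are themselves packed:
procedure CheckSizes;
var
  Kro_Combi: TKro_Combi;
begin
  // With the tag field removed and the record packed,
  // the two variants overlay each other exactly:
  Assert(SizeOf(Kro_Combi) = 7810);
  Assert(SizeOf(Kro_Combi) = SizeOf(Kro_Combi.data));
end;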
How do I change desktop wallpaper?
I tried this
procedure TForm1.Button1Click(Sender: TObject);
var
PicPath: String;
begin
PicPath := 'C:\test.bmp';
SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, PChar(PicPath), SPIF_SENDCHANGE)
end;
But it didn't work.
I just tried it with D2007 on XP (and also D2009 on Vista), and this code works.
But to catch if and why it is not working, you should test the return value and get the error from Windows:
if not SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, PChar(PicPath), SPIF_SENDCHANGE) then
RaiseLastOSError;
In most cases, it will be because the bmp file is not found:
System Error. Code: 2.
The system cannot find the file specified.
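Putting both checks together, a sketch (the procedure name is mine) that verifies the file first and reports any API failure:
procedure SetWallpaperChecked(const PicPath: string);
begin
  // Needs Windows and SysUtils (FileExists, RaiseLastOSError).
  if not FileExists(PicPath) then
    raise Exception.CreateFmt('Wallpaper file not found: %s', [PicPath]);
  if not SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, PChar(PicPath),
      SPIF_UPDATEINIFILE or SPIF_SENDWININICHANGE) then
    RaiseLastOSError;
end;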
You can check out this python script:
http://gaze.svn.sourceforge.net/viewvc/gaze/trunk/implementation/src/gazelib/os_interface.py?view=markup
This is the python method that does all the magic. It changes a few registry keys and then calls a system method to update the wallpaper.
def set_wallpaper(self, file_path):
    self.__lock.acquire()
    # this module is part of python 2.5 by default
    import ctypes
    import _winreg
    reg = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, self.__REGISTRY_PATH, 0, _winreg.KEY_SET_VALUE)
    # First center the image and turn off tiling
    _winreg.SetValueEx(reg, "TileWallpaper", 0, _winreg.REG_SZ, "0")
    _winreg.SetValueEx(reg, "WallpaperStyle", 0, _winreg.REG_SZ, "0")
    # Set the image
    _winreg.SetValueEx(reg, "ConvertedWallpaper", 0, _winreg.REG_SZ, os.path.realpath(file_path))
    _winreg.SetValueEx(reg, "Wallpaper", 0, _winreg.REG_SZ, self.convert_to_bmp(file_path))
    _winreg.CloseKey(reg)
    # Notify the changes to the system
    func_ret_val = ctypes.windll.user32.SystemParametersInfoA(
        self.__SPI_SETDESKWALLPAPER,
        0,
        None,
        self.__SPIF_UPDATEINIFILE | self.__SPIF_SENDWININICHANGE)
    assert func_ret_val == 1
    self.__lock.release()
Check the VB code here; it can give you a clue:
SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, imageLocation, SPIF_UPDATEINIFILE Or SPIF_SENDWININICHANGE)
This should work:
procedure TForm1.Button1Click(Sender: TObject);
var
PicPath : string;
begin
PicPath := 'C:\test.bmp';
SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, Pointer(PicPath), SPIF_SENDWININICHANGE);
end;
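If the wallpaper reverts after a logoff, also pass SPIF_UPDATEINIFILE so the change is written to the user profile, e.g.:
SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, PChar(PicPath),
  SPIF_UPDATEINIFILE or SPIF_SENDWININICHANGE);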