VS2019 Intellisense Says C26451 on a +1 operation? - visual-studio-2019

VS2019 Intellisense says:
Arithmetic overflow: Using operator '+' on a 4 byte value and then
casting the result to a 8 byte value. Cast the value to the wider type
before calling operator '+' to avoid overflow.
Here's the code:
void AddSortedNoCase(LPCTSTR str) {
    INT_PTR nMin = 0;
    INT_PTR nMax = GetUpperBound();
    while (nMin <= nMax) {
        INT_PTR nHit = (nMin + nMax) / 2;
        int cmp = _tcsicmp(str, GetAt(nHit));
        if (cmp > 0)
            nMin = nHit + 1; // <<<<<<<< C26451
        else if (cmp < 0)
            nMax = nHit - 1;
        else
            return; // already in the list
    }
    InsertAt(nMin, str);
}
I don't see a problem with the +1, since the int literal should be promoted to INT_PTR automatically.
The compiler doesn't complain, just IntelliSense on that one line. Perhaps it's a bug, or maybe something I don't know about in C++20?
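For reference, this is roughly what the message literally asks for; a minimal sketch, assuming the analyzer is flagging the int literal 1 (the helper name next_index is made up for illustration, and behavior is unchanged since the literal would be promoted to INT_PTR anyway on x64):
#include <basetsd.h> // INT_PTR (Windows SDK)

INT_PTR next_index(INT_PTR nHit)
{
    // Do the addition entirely in the 8-byte type so no 4-byte arithmetic occurs.
    return nHit + static_cast<INT_PTR>(1);
}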

Related

Is a Fortran subroutine with a dummy-argument-specified size array thread safe?

The following code compiles in gfortran, with a warning about large_array being larger than the limit for a stack variable, stating that the array will be moved to static memory and is therefore not threadsafe:
subroutine stack_size_warning
    implicit none
    real :: large_array(65536)
    print *, large_array
end subroutine stack_size_warning
This subroutine, however, compiles with no errors or warnings, and I can call it with n values larger than 65536 without issue, at least in simple cases.
subroutine no_warning(n)
    implicit none
    integer :: n
    real :: automatic_array(n)
    print *, automatic_array
end subroutine no_warning
Is this second array threadsafe? Where is the memory allocated for automatic_array in this second subroutine? Is the memory allocated and deallocated on every call making it slower than if it was on the stack or if a preallocated array was passed in as a dummy argument?
I wrote the following program to test 3 scenarios: a subroutine with a small array on the stack, another with a large array over the stack limit (and thus stored in static memory), and a third where a dummy argument specifies the size of an array defined inside the routine.
Here is that program:
program main
    implicit none
    call small
    call large
    call automatic(65536)
end program main

subroutine small
    implicit none
    real :: small_array(10)
    small_array = 1.
    print *, small_array
end subroutine small

subroutine large
    implicit none
    real :: large_array(65536)
    large_array = 1.
    print *, large_array
end subroutine large

subroutine automatic(n)
    implicit none
    integer :: n
    real :: automatic_array(n)
    automatic_array = 1.
    print *, automatic_array
end subroutine automatic
Using steve's recommendation I compiled with a tree dump as follows:
gfortran array_dim_test.f90 -o array_dim_test -fdump-tree-original
The full dump is at the end, but to summarize what I see: the automatic subroutine has a try/finally block. In the try block, a call to malloc allocates the memory, and in the finally block, the memory is freed. So this memory is allocated and deallocated on the heap with every call to the subroutine. That makes intuitive sense, since the program cannot know in advance what to do with an array that lives only in the subroutine and whose size is supplied in the call, but it is interesting to see the explicit calls in the tree dump. This would appear to be thread safe, then, but perhaps not the most efficient approach if the routine is called many times with the same array size, since memory is allocated and deallocated on every call.
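If that per-call allocation ever became a bottleneck, one alternative (not taken from the dump, just a sketch) would be to let the caller own the storage and pass it in as a dummy argument, so nothing is allocated inside the routine; the name fill_and_print below is made up for illustration:
subroutine fill_and_print(buf, n)
    implicit none
    integer, intent(in) :: n
    real, intent(inout) :: buf(n) ! caller-owned storage, no per-call allocation
    buf = 1.
    print *, buf
end subroutine fill_and_print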
Here is the tree dump:
__attribute__((fn spec (". w ")))
void automatic (integer(kind=4) & restrict n)
{
void * restrict D.3964;
integer(kind=8) ubound.0;
integer(kind=8) size.1;
real(kind=4)[0:D.3961] * restrict automatic_array;
integer(kind=8) D.3961;
bitsizetype D.3962;
sizetype D.3963;
try
{
ubound.0 = (integer(kind=8)) *n;
size.1 = NON_LVALUE_EXPR <ubound.0>;
size.1 = MAX_EXPR <size.1, 0>;
D.3961 = size.1 + -1;
D.3962 = (bitsizetype) (sizetype) NON_LVALUE_EXPR <size.1> * 32;
D.3963 = (sizetype) NON_LVALUE_EXPR <size.1> * 4;
D.3964 = (void * restrict) __builtin_malloc (MAX_EXPR <(unsigned long) (size.1 * 4), 1>);
automatic_array = (real(kind=4)[0:D.3961] * restrict) D.3964;
{
integer(kind=8) D.3940;
D.3940 = ubound.0;
{
integer(kind=8) S.2;
S.2 = 1;
while (1)
{
if (S.2 > D.3940) goto L.1;
(*automatic_array)[S.2 + -1] = 1.0e+0;
S.2 = S.2 + 1;
}
L.1:;
}
}
{
struct __st_parameter_dt dt_parm.3;
dt_parm.3.common.filename = &"array_dim_test.f90"[1]{lb: 1 sz: 1};
dt_parm.3.common.line = 27;
dt_parm.3.common.flags = 128;
dt_parm.3.common.unit = 6;
_gfortran_st_write (&dt_parm.3);
{
integer(kind=8) D.3944;
struct array01_real(kind=4) parm.4;
D.3944 = ubound.0;
parm.4.span = 4;
parm.4.dtype = {.elem_len=4, .rank=1, .type=3};
parm.4.dim[0].lbound = 1;
parm.4.dim[0].ubound = D.3944;
parm.4.dim[0].stride = 1;
parm.4.data = (void *) &(*automatic_array)[0];
parm.4.offset = -1;
_gfortran_transfer_array_write (&dt_parm.3, &parm.4, 4, 0);
}
_gfortran_st_write_done (&dt_parm.3);
}
}
finally
{
__builtin_free ((void *) automatic_array);
}
}
__attribute__((fn spec (". ")))
void large ()
{
static real(kind=4) large_array[65536];
{
integer(kind=8) S.5;
S.5 = 1;
while (1)
{
if (S.5 > 65536) goto L.2;
large_array[S.5 + -1] = 1.0e+0;
S.5 = S.5 + 1;
}
L.2:;
}
{
struct __st_parameter_dt dt_parm.6;
dt_parm.6.common.filename = &"array_dim_test.f90"[1]{lb: 1 sz: 1};
dt_parm.6.common.line = 19;
dt_parm.6.common.flags = 128;
dt_parm.6.common.unit = 6;
_gfortran_st_write (&dt_parm.6);
{
struct array01_real(kind=4) parm.7;
parm.7.span = 4;
parm.7.dtype = {.elem_len=4, .rank=1, .type=3};
parm.7.dim[0].lbound = 1;
parm.7.dim[0].ubound = 65536;
parm.7.dim[0].stride = 1;
parm.7.data = (void *) &large_array[0];
parm.7.offset = -1;
_gfortran_transfer_array_write (&dt_parm.6, &parm.7, 4, 0);
}
_gfortran_st_write_done (&dt_parm.6);
}
}
__attribute__((fn spec (". ")))
void small ()
{
real(kind=4) small_array[10];
{
integer(kind=8) S.8;
S.8 = 1;
while (1)
{
if (S.8 > 10) goto L.3;
small_array[S.8 + -1] = 1.0e+0;
S.8 = S.8 + 1;
}
L.3:;
}
{
struct __st_parameter_dt dt_parm.9;
dt_parm.9.common.filename = &"array_dim_test.f90"[1]{lb: 1 sz: 1};
dt_parm.9.common.line = 12;
dt_parm.9.common.flags = 128;
dt_parm.9.common.unit = 6;
_gfortran_st_write (&dt_parm.9);
{
struct array01_real(kind=4) parm.10;
parm.10.span = 4;
parm.10.dtype = {.elem_len=4, .rank=1, .type=3};
parm.10.dim[0].lbound = 1;
parm.10.dim[0].ubound = 10;
parm.10.dim[0].stride = 1;
parm.10.data = (void *) &small_array[0];
parm.10.offset = -1;
_gfortran_transfer_array_write (&dt_parm.9, &parm.10, 4, 0);
}
_gfortran_st_write_done (&dt_parm.9);
}
}
__attribute__((fn spec (". ")))
void MAIN__ ()
{
small ();
large ();
{
static integer(kind=4) C.3993 = 65536;
automatic (&C.3993);
}
}
__attribute__((externally_visible))
integer(kind=4) main (integer(kind=4) argc, character(kind=1) * * argv)
{
static integer(kind=4) options.11[7] = {2116, 4095, 0, 1, 1, 0, 31};
_gfortran_set_args (argc, argv);
_gfortran_set_options (7, &options.11[0]);
MAIN__ ();
return 0;
}

C to Lua conversion - weird result

I have a C function that I want to convert to Lua, but I'm getting strange results out of Lua:
unsigned short crc16(const char* pstrCurrent, int iCount)
{
    unsigned short wCRC = 0;
    int iIndex = 0;
    while (--iCount >= 0)
    {
        wCRC = wCRC ^ ((int)(*pstrCurrent++) << 8);
        printf("WCRC = %u\n", wCRC);
    }
    return (wCRC & 0xFFFF);
}
and here is how I started the Lua:
local function crc16(keyCurrent, byteCount)
    wCRC = 0
    byteIndex = 1
    local crcInput = {}
    while byteCount > 0 do
        print("BYTE COUNT= " .. byteCount)
        wCRC = bit32.bxor(wCRC, bit32.lshift(keyCurrent[byteIndex], 8))
        print("WCRC = " .. wCRC)
        byteCount = byteCount - 1
        byteIndex = byteIndex + 1
    end
end
Yes, I know the C function is incomplete; I just want to compare what's causing the issues.
The WCRC prints in C and in Lua show completely different numbers for the same input.
Is my Lua conversion incorrect? It's only my second or third time using Lua, so I'm not quite sure what I'm doing wrong.
***************** UPDATE ********************
So here is the full C and Lua, and a quick little test:
unsigned short crc16(const char* pstrCurrent, int iCount)
{
    unsigned short wCRC = 0;
    int iIndex = 0;
    // Perform the following for each character in the buffer
    while (--iCount >= 0)
    {
        // Get the byte information for the calculation and
        // advance the pointer
        wCRC = wCRC ^ ((int)(*pstrCurrent++) << 8);
        for (iIndex = 0; iIndex < 8; ++iIndex)
        {
            if (wCRC & 0x8000)
            {
                wCRC = (wCRC << 1) ^ 0x1021;
            }
            else
            {
                wCRC = wCRC << 1;
            }
        }
    }
    return (wCRC & 0xFFFF);
}
and the Lua conversion:
function crc16(keyCurrent, iCount)
    wCRC = 0
    byteIndex = 1
    iIndex = 0
    local crcInput = {}
    while iCount >= 1 do
        wCRC = bit32.bxor(wCRC, bit32.lshift(keyCurrent[byteIndex], 8))
        for iIndex = 0, 8 do
            if (bit32.band(wCRC, 0x8000) ~= nil) then
                wCRC = bit32.bxor(bit32.lshift(wCRC, 1), 0x1021)
            else
                wCRC = bit32.lshift(wCRC, 1)
            end
        end
        iCount = iCount - 1
        byteIndex = byteIndex + 1
    end
    return (bit32.band(wCRC, 0xFFFF))
end

local dKey = {}
dKey = {8, 210, 59, 0, 18, 166, 254, 117}
print("CRC = " .. crc16(dKey, 8))
In C, for the same array I get: CRC16 = 567
In Lua, I get: CRC = 61471
Can someone tell me what I'm doing wrong?
Thanks
It seems they yield the same results:
pure-C
WCRC = 18432
WCRC = 11520
WCRC = 16640
WCRC = 11520
pure-Lua
BYTE COUNT= 4
WCRC = 18432
BYTE COUNT= 3
WCRC = 11520
BYTE COUNT= 2
WCRC = 16640
BYTE COUNT= 1
WCRC = 11520
ASCII convertor:
What do you mean?
There are mistakes in the altered Lua sample:
1. bit32.band() returns a number. The number 0 is not equal to nil; those are totally different types. You're comparing a number with nil, so the condition is always true and the test never detects whether the high bit is set. Compare against 0 instead.
2. for iIndex=0,8 do iterates 9 times, including the final index 8, while the C loop runs only 8 times.
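With those two fixes applied, the loop might look like the following; this is just a sketch of the corrections described above, not a verified port (the extra 0xFFFF mask mirrors the unsigned short truncation in the C version):
while iCount >= 1 do
    wCRC = bit32.bxor(wCRC, bit32.lshift(keyCurrent[byteIndex], 8))
    for iIndex = 0, 7 do                          -- 8 iterations, like the C for loop
        if bit32.band(wCRC, 0x8000) ~= 0 then     -- compare with 0, not nil
            wCRC = bit32.bxor(bit32.lshift(wCRC, 1), 0x1021)
        else
            wCRC = bit32.lshift(wCRC, 1)
        end
        wCRC = bit32.band(wCRC, 0xFFFF)           -- keep it to 16 bits, like unsigned short
    end
    iCount = iCount - 1
    byteIndex = byteIndex + 1
end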

not correct num histogram

I'm trying to make a toString method that prints out a histogram showing how often each character of the alphabet is used in a string. The most frequent character has to be 60 #s long, with the rest of the characters then scaled to match.
My issue is with the equation that scales the rest of the letters to the correct length for the histogram. My current equation is (myArray[i]/max) * 60, but I'm getting really weird results.
If I put in "hello world" to be analyzed, L would be the most commonly occurring letter, seen 3 times. So L should have 60 #s in the histogram, h should have 20, o should have 40, etc. Instead I'm getting results like d : 10
e : 10
h : 10
l : 360
o : 20
r : 10
w : 10
Sorry for how sloppy this is right now; I'm just trying to figure out what's going on.
public class LetterCounter {
    private static int[] alphabetArray;
    private static String input;

    /**
     * Constructor for objects of class LetterCounter
     */
    public LetterCounter()
    {
        alphabetArray = new int[26];
    }

    public void countLetters(String input) {
        this.input = input;
        this.input.toLowerCase();
        //String s= input;
        //s.toLowerCase();
        for ( int i = 0; i < input.length(); i++ ) {
            char ch= input.charAt(i);
            if (ch >= 97 && ch <= 122){
                alphabetArray[ch-'a']++;
            }
        }
    }

    public void getTotalCount() {
        for (int i = 0; i < alphabetArray.length; i++) {
            if(alphabetArray[i]>=0){
                char ch = (char) (i+97);
                System.out.println(ch +" : "+alphabetArray[i]);
            }
        }
    }

    public void reset() {
        for (int i =0; i<alphabetArray.length; i++) {
            if(alphabetArray[i]>=0){
                alphabetArray[i]=0;
                char ch = (char) (i+97);
                System.out.println(ch +" : "+alphabetArray[i]);
            }
        }
    }

    public String toString() {
        String s = "";
        int max = alphabetArray[0];
        int markCounter = 0;
        for(int i =0; i<alphabetArray.length; i++) {
            //finds the largest number of occurences for any letter in the string
            if(alphabetArray[i] > max) {
                max = alphabetArray[i];
            }
        }
        for(int i =0; i<alphabetArray.length; i++) {
            //trying to scale the rest of the characters down here
            if(alphabetArray[i] > 0) {
                markCounter = (alphabetArray[i] / max) * 60;
                char ch = (char) (i+97);
                System.out.println(ch +" : "+alphabetArray[i] + markCounter);
            }
        }
        for (int i = 0; i < alphabetArray.length; i++) {
            //prints the whole alphabet, total number of occurences for all chars
            if(alphabetArray[i]>=0){
                char ch = (char) (i+97);
                System.out.println(ch +" : "+alphabetArray[i]);
            }
        }
        return s;
    }
}
There are many, many problems with your code, but let's go through them one by one.
First of all, your print statement is simply misleading. Change it to
System.out.println(ch +" : "+alphabetArray[i] + " " + markCounter);
and you will see
d : 1 0
e : 1 0
h : 1 0
l : 3 60
o : 2 0
r : 1 0
w : 1 0
As you can see, the counters are correct (1, 1, 1, 3, 2, 1, 1). But your scaling doesn't work:
1 / 3 --> 0 ... and 0 * 60 is still 0
3 / 3 --> 1 ... and 1 * 60 is 60
and of course, when you don't print a space, the count and the scaled value run together: "1" and "0" become "10", and "3" and "60" become "360".
Thus to get correct scaling, just change to:
markCounter = alphabetArray[i] * 60 / max;
Other things worth mentioning:
You are overriding toString(), so you should put @Override in front of that method
toLowerCase() returns a new string in lower case; calling it without assigning the result back to your string just throws away the "lower casing".
toString() shouldn't print to the console. The whole idea is that you put all the information into the string that you return. In other words: in the end you do something like System.out.println(someLetterCounter.toString())
Your code is extremely low-level. You don't have to iterate arrays with an index-based for (int i ...) loop; you can use for (int letter : alphabetArray) instead
You might want to read about Map. If you used a Map&lt;Character, Integer&gt;, where the map key represents the different characters and the map value represents a counter for each character, you could throw out most of your code and come up with a solution that requires only a few lines (see the sketch after this list)
( and seriously: because of all these issues, debugging your code was really much harder than it needed to be )
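As a rough illustration of that Map idea (a sketch only, not a drop-in replacement for the class above; the class name LetterHistogram and the use of String.repeat, which needs Java 11+, are my own choices):
import java.util.Map;
import java.util.TreeMap;

public class LetterHistogram {
    // Count each lower-case letter, then scale the longest bar to 60 '#' marks.
    public static String histogram(String input) {
        Map<Character, Integer> counts = new TreeMap<>();
        for (char ch : input.toLowerCase().toCharArray()) {
            if (ch >= 'a' && ch <= 'z') {
                counts.merge(ch, 1, Integer::sum);
            }
        }
        int max = counts.values().stream().max(Integer::compare).orElse(1);
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            int marks = e.getValue() * 60 / max; // multiply first to avoid integer-division truncation
            sb.append(e.getKey()).append(" : ").append("#".repeat(marks)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(histogram("hello world")); // l gets 60 marks, o gets 40, the rest get 20
    }
}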
countLetters seems to have some issues. You cannot convert a String to lower case by just calling
this.input.toLowerCase();
because String is immutable in Java. You have to assign it, like:
this.input = input.toLowerCase();
Another problem is that you are using the input variable from the parameter instead of this.input, which holds the lower-case string. You can do it this way to make the countLetters method work:
public void countLetters(String input) {
    this.input = input.toLowerCase();
    for ( int i = 0; i < this.input.length(); i++ ) {
        char ch = this.input.charAt(i);
        if (ch >= 97 && ch <= 122) {
            alphabetArray[ch-'a']++;
        }
    }
}

C++ code for Delphi `in` set operator

I could not fully understand set membership in the help files. Please explain how in is handled in C++ for the following code:
if s1[1] in['0'..'9'] then
begin
  ii := StrToInt(s1)+1;
  s1 := IntToStr(ii);
  if Length(s1)<2 then s1 := '0'+s1;
  Edit_deneyismi.text := copy(s,1,i)+s1;
end
else Edit_deneyismi.text := 'Yeni_Deney_01';
Delphi sets are implemented in C++Builder using the Set<> template class, which has a Contains() method to support in operations, eg:
Set<char, '0', '9'> Digits;
for (char c = '0'; c <= '9'; ++c)
    Digits << c;

if (Digits.Contains(s1[1]))
{
    ii = StrToInt(s1)+1;
    s1 = IntToStr(ii);
    if (s1.Length() < 2) s1 = "0" + s1;
    Edit_deneyismi->Text = s.SubString(1, i) + s1;
}
else
    Edit_deneyismi->Text = "Yeni_Deney_01";
Otherwise, use the C isdigit() function, or the RTL Character::IsDigit() function. Or just compare the char values manually like Michael suggested.
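For instance, a minimal isdigit()-based helper could look like this (a sketch only; the helper name isDigitChar is mine, and the unsigned char cast avoids undefined behavior for negative char values):
#include <cctype>

// Equivalent of the Delphi test "c in ['0'..'9']" for a single character.
bool isDigitChar(char c)
{
    return std::isdigit(static_cast<unsigned char>(c)) != 0;
}

With that, the check above could be written as if (isDigitChar(s1[1])) instead of building a Set.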

How can I do mod without a mod operator?

This scripting language doesn't have a % or Mod(). I do have a Fix() that chops off the decimal part of a number. I only need positive results, so don't get too robust.
Will
// mod = a % b
c = Fix(a / b)
mod = a - b * c
do? I'm assuming you can at least divide here. All bets are off on negative numbers.
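For example, assuming Fix() truncates toward zero, 17 mod 5 works out like this (the variable names are just for illustration, with BrightScript-style comments since that turned out to be the language in question):
a = 17
b = 5
c = Fix(a / b)          ' Fix(3.4) = 3
remainder = a - b * c   ' 17 - 15 = 2, which matches 17 mod 5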
a mod n = a - (n * Fix(a/n))
For posterity, BrightScript now has a modulo operator, it looks like this:
c = a mod b
If someone arrives later, here are some more actual algorithms (with errors...read carefully)
https://eprint.iacr.org/2014/755.pdf
There are actually two main kinds of reduction formulae: Barrett and Montgomery. The paper from eprint repeats both in different versions (algorithms 1-3) and gives an "improved" version in algorithm 4.
Overview
I now give an overview of the 4th algorithm:
1.) Compute A*B and store the whole product in C; C and the modulus $p$ are the inputs to the algorithm.
2.) Compute the bit length of $p$; say the function "Width(p)" returns exactly that value.
3.) Split the input $C$ into N "blocks" of size "Width(p)" and store each block in G, starting with G[0] = the least significant block of C and ending with G[N-1] = the most significant block. (The description in the paper is really faulty.)
4.) Start the while loop:
Set N = N-1 (to reach the last element)
precompute $b := 2^{Width(p)} \bmod p$
while N > 0 do:
    T = G[N]
    for (i = 0; i < Width(p); i++) do:          // Note: the counter value doesn't matter, it only limits the loop
        T = T << 1                              // left shift by 1 bit
        while is_set( bit( T, Width(p) ) ) do   // the (N+1)-th bit of T is 1
            unset( bit( T, Width(p) ) )         // unset the (N+1)-th bit of T (== 0)
            T += b
        endwhile
    endfor
    G[N-1] += T
    while is_set( bit( G[N-1], Width(p) ) ) do
        unset( bit( G[N-1], Width(p) ) )
        G[N-1] += b
    endwhile
    N -= 1
endwhile
That does a lot. Now we only need to recursively reduce G[0]:
while G[0] >= p do
    G[0] -= p
endwhile
return G[0]   // = C mod p
The other three algorithms are well defined, but this one lacks some information or presents it incorrectly. But it works for any size ;)
What language is it?
A basic algorithm might be:
hold the divisor in a variable (modulo);
hold the target number in a variable (target);
the remainder ends up in a variable (modulus);
while (target >= modulo) {
    target -= modulo;
}
modulus = target;
This may not work for you performance-wise, but:
while (num >= mod_limit)
    num = num - mod_limit
In JavaScript:
function modulo(num1, num2) {
    if (num2 === 0 || isNaN(num1) || isNaN(num2)) {
        return NaN;
    }
    if (num1 === 0) {
        return 0;
    }
    var remainderIsPositive = num1 >= 0;
    num1 = Math.abs(num1);
    num2 = Math.abs(num2);
    while (num1 >= num2) {
        num1 -= num2;
    }
    return remainderIsPositive ? num1 : 0 - num1;
}

Resources