Assigned value is garbage or undefined - iOS

I have posted a screenshot of my error code (the heights output).
Can anyone help me, please?

I think the static analyzer cannot see how _numberOfColumns can become non-zero, hence its insistence that garbage is being assigned. You need to check that you are actually providing some means for _numberOfColumns to become non-zero.
Generally, when I write loops that need to find the largest or smallest value, I initialize the variable to the largest possible value (if I want the smallest) or the smallest (if I want the largest). I think this will solve most of your issues:
float shortestHeight = FLT_MAX; // FLT_MAX comes from <float.h>
for (unsigned i = 0; i < _numberOfColumns; i++)
{
    if (heights[i] < shortestHeight)
        shortestHeight = heights[i]; // track the smallest height seen so far
}

The analyzer is correct: if _numberOfColumns is 0, you allocate 0 bytes for heights, which makes heights[0] garbage. The analyzer doesn't know what values _numberOfColumns can take, but you can tell it by using assert(_numberOfColumns > 0).
Take this C program for example:
#include <stdlib.h>

int main(int argc, const char * argv[])
{
    int n = argc - 1;
    int *a = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) {
        a[i] = i;
    }
    int foo = a[0];
    free(a);
    return foo;
}
The size of a is determined by the number of arguments; if you have no arguments, n == 0. If you are sure that your program (or just that part of your program) will always assign something greater than 0 to n, you can use an assertion. Adding assert(n > 0) will tell the analyzer exactly that.
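For example, here is the same program with the assertion added (note that assert compiles to nothing when NDEBUG is defined, so it documents the assumption for the analyzer rather than enforcing it in release builds):
#include <assert.h>
#include <stdlib.h>

int main(int argc, const char * argv[])
{
    int n = argc - 1;
    assert(n > 0); // the analyzer now knows n >= 1 past this point
    int *a = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) {
        a[i] = i;
    }
    int foo = a[0];
    free(a);
    return foo;
}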

Related

DirectX compute shader: how to write a function with variable array size argument?

I'm trying to write a function within a compute shader (HLSL) that accepts an array argument of varying size. The compiler always rejects it.
Example (not working!):
void TestFunc(in uint SA[])
{
    int K;
    for (K = 0; SA[K] != 0; K++) {
        // Some code using SA array
    }
}

[numthreads(1, 1, 1)]
void CSMain()
{
    uint S1[] = {1, 2, 3, 4}; // Compiler happy and discovers the array size
    uint S2[] = {10, 20};     // Compiler happy and discovers the array size
    TestFunc(S1);
    TestFunc(S2);
}
If I give an array size in TestFunc(), then the compiler is happy when calling TestFunc() with that specific array size, but refuses the call for any other size.
You cannot have function parameters of indeterminate size.
You need to initialize an array of known length, and an int variable that holds the array length.
void TestFunc(in uint SA[4], in uint saCount)
{
    int K;
    for (K = 0; SA[K] != 0; K++)
    {
        // Some code using SA array; saCount is your array length
    }
}
[numthreads(1, 1, 1)]
void CSMain()
{
    uint S1count = 4;
    uint S1[] = {1, 2, 3, 4};
    uint S2count = 2;
    uint S2[] = {10, 20, 0, 0};
    TestFunc(S1, S1count);
    TestFunc(S2, S2count);
}
In my example I have set your array's max size to 4, but you can make it bigger if needed. You can also define multiple functions for different array lengths, or set up multiple passes if your data overflows your array's max size.
Edit to answer comment
The issue is that array dimensions of function parameters must be explicit, as the compiler error states. This cannot be avoided. What you can do, however, is avoid passing the array at all. If you in-line your TestFunc in your CSMain, you avoid passing the array, and your routine compiles and runs. I know it can make your code longer and harder to maintain, but it's the only way to do what you want with an array of unspecified length. The advantage is that this way you have access to array.Length, which might make your code simpler.
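A minimal sketch of what the in-lined version could look like (illustrative only; the explicit bound of 4 mirrors S1's initializer above and is added for safety):
[numthreads(1, 1, 1)]
void CSMain()
{
    uint S1[] = {1, 2, 3, 4};
    // TestFunc's body in-lined: the array size is now known to the compiler.
    for (int K = 0; K < 4 && S1[K] != 0; K++)
    {
        // Some code using S1 array
    }
}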

Xcode Optimizer Bug or stupid mistake?

This code mimics some image processing with malloc'ed memory; it's a distilled example of a problem. It runs fine when optimized at other levels, including "Fastest, Smallest", but fails at GCC_OPTIMIZATION_LEVEL = 3, AKA Fastest [-O3], and Fastest, Aggressive. It crashes only on the device; seen on an iPhone 6, 5s, and 5, on various iOS versions (9.3, 8.4).
There's something about the allocation sizes that aggravates the issue. There are some notes in the code about what helps make it fail.
Reproduce by creating a single-view app project, setting the optimization level to "Fastest", and pasting this code into main and calling it from inside the autorelease pool, or pasting it into the view controller and calling it from viewDidLoad or anywhere you like.
The debugger isn't very useful with optimizations turned on, but the crash comes in the while loop at "*writeIter = readIter->d;" with an EXC_BAD_ACCESS (code=1).
So that tells me it's a read, and the address that triggers the EXC_BAD_ACCESS is the same as readEnd. That should never happen, as that's the condition the while loop is supposed to prevent... optimizer bug or stupid mistake?
#import <stdlib.h>
#import <stdio.h>

/**
 Requires this to fail -> GCC_OPTIMIZATION_LEVEL = 3
 This won't do it      -> GCC_OPTIMIZATION_LEVEL = s
 */
typedef struct {
    unsigned char a, b, c, d;
} foo;

void boom()
{
    char* memory[1000];
    // These sizes are important to reproducing this issue; changing them by +-1 will make it go away.
    int height = 960;  // 480, 960, 1920
    int width = 1280;  // 640, 1280, 2560
    int depth = sizeof(foo);
    printf("height = %d, width = %d, total = %d\n\n", height, width, height*width*depth);
    for (int i = 0; i < 1000; ++i)
    {
        // Allocate memory to force the allocations of readBuf and writeBuf to move; numbers
        // less than 15k don't affect the allocated addresses of the bufs, so we keep getting
        // the same ones and no boom.
        memory[i] = malloc(20000);
        foo* readBuf = malloc(height*width*depth);
        unsigned char* writeBuf = malloc(height*width); // smaller than read
        foo* readIter = readBuf;
        foo* readEnd = readBuf + height*width; // only read the size of the smaller buffer
        unsigned char* writeIter = writeBuf;
        printf("test: i = %d, readIter = %p, readEnd = %p, writeIter = %p\n", i, readIter, readEnd, writeIter);
        while (readIter < readEnd)
        {
            // You died here during a read, with readIter == readEnd; look at the EXC_BAD_ACCESS
            // address (printf'ed above): it's readEnd, and that isn't supposed to happen with
            // the conditional.
            *writeIter = readIter->d;
            ++writeIter;
            ++readIter;
        }
        free(readBuf);
        free(writeBuf);
    }
    for (int i = 0; i < 1000; ++i)
    {
        free(memory[i]);
    }
}

How do I allocate an array at runtime in Rust?

Once I have allocated the array, how do I manually free it? Is pointer arithmetic possible in unsafe mode?
Like in C++:
double *A = new double[1000];
double *p = A;
int i;
for (i = 0; i < 1000; i++)
{
    *p = (double)i;
    p++;
}
delete[] A;
Is there any equivalent code in Rust?
Based on your question, I'd recommend reading the Rust Book if you haven't done so already. Idiomatic Rust will almost never involve manually freeing memory.
As for the equivalent to a dynamic array, you want a vector. Unless you're doing something unusual, you should avoid pointer arithmetic in Rust. You can write the above code variously as:
// Pre-allocate space, then fill it.
let mut a = Vec::with_capacity(1000);
for i in 0..1000 {
    a.push(i as f64);
}

// Allocate and initialise, then overwrite.
let mut a = vec![0.0f64; 1000];
for i in 0..1000 {
    a[i] = i as f64;
}

// Construct directly from an iterator.
let a: Vec<f64> = (0..1000).map(|n| n as f64).collect();
It is completely possible to allocate a fixed-sized array on the heap:
let mut a = Box::new([0.0f64; 1000]);
Because of deref coercion, you can still use this as an array:
for i in 0..1000 {
    a[i] = i as f64;
}
You can manually free it by doing:
std::mem::drop(a);
drop takes ownership of the array, so this is completely safe. As mentioned in the other answer, it is almost never necessary to do this; the box will be freed automatically when it goes out of scope.
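For instance (a sketch; the closing brace is what frees the allocation):
fn main() {
    {
        let a = Box::new([0.0f64; 1000]);
        println!("first element = {}", a[0]); // use the box while it is alive
    } // `a` goes out of scope here; the heap allocation is freed automatically
}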

Any good idea for OpenCL atom_inc separation?

I want to count the total number of non-zero points in an image using OpenCL.
Since it is an adding operation, I used atom_inc.
The kernel code is shown here:
__kernel void points_count(__global unsigned char* image_data, __global int* total_number, int image_width)
{
    size_t gidx = get_global_id(0);
    size_t gidy = get_global_id(1);
    if (0 != *(image_data + gidy*image_width + gidx))
    {
        atom_inc(total_number);
    }
}
My question is: using atom_inc like this creates a lot of contention, right?
Whenever we meet a non-zero point, we have to wait on the atom_inc.
My idea is this: we can separate the rows into several groups, count the points in each group separately, and add the group totals at the end.
Something like this:
__kernel void points_count(__global unsigned char* image_data, __global int* total_number_array, int image_width)
{
    size_t gidx = get_global_id(0);
    size_t gidy = get_global_id(1);
    if (0 != *(image_data + gidy*image_width + gidx))
    {
        int stepy = gidy % 10;
        atom_inc(total_number_array + stepy);
    }
}
This separates the whole problem into more groups.
In that case, we can add up the numbers in total_number_array one by one at the end.
Theoretically speaking, it should give a great performance improvement, right?
So, does anyone have advice about this summing issue?
Thanks!
As mentioned in the comments, this is a reduction problem.
The idea is to keep separate counts and then put them back together at the end.
Consider using local memory to store the values: declare a local buffer to be used by each work group, keep track of the number of occurrences in this buffer using the local_id as the index, and sum these values at the end of execution.
A very good introduction to the reduction problem using OpenCL is given here:
http://developer.amd.com/resources/documentation-articles/articles-whitepapers/opencl-optimization-case-study-simple-reductions/
The reduction kernel could look like this (taken from the link above):
__kernel
void reduce(
    __global float* buffer,
    __local float* scratch,
    __const int length,
    __global float* result)
{
    int global_index = get_global_id(0);
    int local_index = get_local_id(0);
    // Load data into local memory
    if (global_index < length) {
        scratch[local_index] = buffer[global_index];
    } else {
        // Infinity is the identity element for the min operation
        scratch[local_index] = INFINITY;
    }
    barrier(CLK_LOCAL_MEM_FENCE);
    for (int offset = get_local_size(0) / 2;
         offset > 0;
         offset >>= 1) {
        if (local_index < offset) {
            float other = scratch[local_index + offset];
            float mine = scratch[local_index];
            scratch[local_index] = (mine < other) ? mine : other;
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (local_index == 0) {
        result[get_group_id(0)] = scratch[0];
    }
}
For further explanation, see the link above.
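For the counting problem specifically, the same pattern works with addition instead of min, where 0 is the identity element. A rough sketch, assuming a power-of-two work-group size (the kernel and argument names here are illustrative, not from the article):
__kernel void count_nonzero(
    __global const unsigned char* image_data,
    __local int* scratch,
    const int length,
    __global int* partial_counts)
{
    int global_index = get_global_id(0);
    int local_index = get_local_id(0);
    // Each work item contributes 1 for a non-zero pixel, 0 otherwise;
    // out-of-range items contribute the additive identity, 0.
    scratch[local_index] = (global_index < length && image_data[global_index] != 0) ? 1 : 0;
    barrier(CLK_LOCAL_MEM_FENCE);
    // Tree reduction in local memory: halve the number of active items each pass.
    for (int offset = get_local_size(0) / 2; offset > 0; offset >>= 1) {
        if (local_index < offset) {
            scratch[local_index] += scratch[local_index + offset];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    // One partial count per work group; sum these on the host,
    // or with a second (much smaller) reduction pass.
    if (local_index == 0) {
        partial_counts[get_group_id(0)] = scratch[0];
    }
}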

Odd atoi(char *) issue

I'm experiencing a very odd issue with atoi(char *). I'm trying to convert a char into its numerical representation (I know that it is a number), which works perfectly fine 98.04% of the time, but gives me a random value the other 1.96% of the time.
Here is the code I am using to test it:
int increment = 0, repetitions = 10000000;
for (int i = 0; i < repetitions; i++)
{
    char randomNumber = (char)rand()%10 + 48;
    int firstAtoi = atoi(&randomNumber);
    int secondAtoi = atoi(&randomNumber);
    if (firstAtoi != secondAtoi) NSLog(@"First: %d - Second: %d", firstAtoi, secondAtoi);
    if (firstAtoi > 9 || firstAtoi < 0)
    {
        increment++;
        NSLog(@"First Atoi: %d", firstAtoi);
    }
}
NSLog(@"Ratio Percentage: %.2f", 100.0f * (float)increment/(float)repetitions);
I'm using the GNU99 C Language Dialect in Xcode 4.6.1. The first if (for when the first number does not equal the second) never logs, so the two atoi calls always return the same result as each other; however, the (incorrect) values vary. The incorrect results seemingly range from -1000 up to 10000; I haven't seen any above 9999 or below -999.
Please let me know what I am doing wrong.
EDIT:
I have now changed how the character is constructed:
char numberChar = (char)rand()%10 + 48;
char randomNumber[2];
randomNumber[0] = numberChar;
randomNumber[1] = 0;
However, I am using:
MAX(MIN((int)(myCharacter - '0'), 9), 0)
to get the integer value.
I really appreciate all of the answers!
atoi expects a string. You have not given it a string; you have given it a single char. A string is defined as some number of characters terminated by the null character. You are invoking undefined behavior.
From the docs:
If str does not point to a valid C-string, or if the converted value would be out of the range of values representable by an int, it causes undefined behavior.
Want to "convert" a character to its integral representation? Don't overcomplicate things;
int x = some_char;
A char is an integer already, not a string. Don't think of a single char as text.
If I'm not mistaken, atoi expects a null-terminated string (see the documentation here).
You're passing in a single stack-based value, which does not have to be null-terminated. I'm extremely surprised it's even getting it right: it could keep reading garbage characters indefinitely if it never finds a null terminator. If you just want the number a single char represents (as in, the numeric value of the char's human-readable representation), why don't you just do int numeric = randomNumber - 48?
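A minimal sketch combining both fixes suggested above (illustrative only; variable names follow the question's code):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char randomNumber = (char)(rand() % 10) + '0';

    // Fix 1: give atoi a real, null-terminated string.
    char buf[2] = { randomNumber, '\0' };
    int viaAtoi = atoi(buf);

    // Fix 2: skip atoi entirely. The C standard guarantees the digit
    // characters '0'..'9' are contiguous, so subtracting '0' yields
    // the numeric value directly.
    int viaSubtraction = randomNumber - '0';

    printf("%d %d\n", viaAtoi, viaSubtraction);
    return 0;
}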
