UIView alpha value issues - ios

I have a small method that I am calling to draw stars progressively as a game moves on. Here is the code:
-(void)stars{
    for (int i = 0; i < (ScoreNumber * 3); i++){
        int starX = ((arc4random() % (320 - 0 + 1)) + 0);
        int starY = ((arc4random() % (640 - 0 + 1)) + 0);
        int starSize = ((arc4random() % (1 - 0 + 1)) + 1);
        UIView *stars = [[UIView alloc] initWithFrame:CGRectMake(starX, starY, starSize, starSize)];
        stars.alpha = (i / 5);
        stars.backgroundColor = [UIColor whiteColor];
        [self.view addSubview:stars];
    }
}
The stars do show, but each iteration through the loop bugs out another UIImageView (the main character) and resets its position. Also, the alpha values appear not to work at all; every star seems to use a value of 1 (fully visible). Any advice (for a new programmer) would be appreciated.

i is an integer in this case, so the division is integer division and the result is truncated to a whole number: 0 while i < 5, then 1, 2, 3, etc. Instead you might want:
stars.alpha = (CGFloat)i / 5.0;
Although alpha will still be 1.0 or more after i >= 5.
Maybe you meant something like:
stars.alpha = 0.20 + (CGFloat)(i % 5) / 5.0;
That will give your stars alpha values between 0.2 and 1.0.

The problem is that only the first 5 stars will have an alpha less than one:
-(void)stars{
    for (int i = 0; i < (ScoreNumber * 3); i++){
        int starX = ((arc4random() % (320 - 0 + 1)) + 0);
        int starY = ((arc4random() % (640 - 0 + 1)) + 0);
        int starSize = ((arc4random() % (1 - 0 + 1)) + 1);
        UIView *stars = [[UIView alloc] initWithFrame:CGRectMake(starX, starY, starSize, starSize)];
        stars.alpha = (i / 5); // ONCE THIS IS 5 (LIKELY WON'T TAKE LONG), ALPHA WILL BE 1 FOR ALL YOUR STARS
        stars.backgroundColor = [UIColor whiteColor];
        [self.view addSubview:stars];
    }
}
Also, if a star is added to the superview on top of an existing star and its alpha is actually less than 1, the overlap will look more opaque than the alpha value suggests.
One fix might be to change the 5 to something bigger, like 25 or 50. It's hard to know what would be appropriate without knowing how big ScoreNumber can be.
Edit:
Also, just realized another problem: you're dividing an int by an int, so alpha will be an int (not what you want). If you change the 5 to 5.0 (or 25.0 or 50.0), you'll get a float.
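Putting the pieces together, a corrected version of the method might look something like this (just a sketch: 50.0 is an arbitrary divisor that you should scale to whatever range ScoreNumber can reach, and arc4random_uniform is used only to avoid the modulo bias of arc4random() % n):
-(void)stars{
    for (int i = 0; i < (ScoreNumber * 3); i++){
        int starX = arc4random_uniform(321);        // 0 ... 320
        int starY = arc4random_uniform(641);        // 0 ... 640
        int starSize = 1 + arc4random_uniform(2);   // 1 or 2 points
        UIView *star = [[UIView alloc] initWithFrame:CGRectMake(starX, starY, starSize, starSize)];
        star.alpha = MIN(1.0, 0.2 + (CGFloat)i / 50.0); // floating-point division, clamped at 1.0
        star.backgroundColor = [UIColor whiteColor];
        [self.view addSubview:star];
    }
}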
Hope it helps!

Google Maps heat map color by average weight

The Google Maps iOS SDK's heat map (more specifically the Google-Maps-iOS-Utils framework) decides the color to render an area in essentially by calculating the density of the points in that area.
However, I would like to instead select the color based on the average weight or intensity of the points in that area.
From what I understand, this behavior is not built in (but who knows: the documentation sort of sucks). The file where the color-picking is decided is, I think, /src/Heatmap/GMUHeatmapTileLayer.m. This is a relatively short file, but I am not very well versed in Objective-C, so I am having some difficulty figuring out what does what. I think -tileForX:y:zoom: in GMUHeatmapTileLayer.m is the important method, but I'm not sure, and even if it is, I don't quite know how to modify it. Towards the end of this method, the data is 'convolved', first horizontally and then vertically. I think this is where the intensities are actually calculated. Unfortunately, I do not know exactly what it's doing, and I am afraid of changing things because I suck at obj-c. This is what the convolve parts of this method look like:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
    // ...
    // Convolve data.
    int lowerLimit = (int)data->_radius;
    int upperLimit = paddedTileSize - (int)data->_radius - 1;
    // Convolve horizontally first.
    float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
    for (int y = 0; y < paddedTileSize; y++) {
        for (int x = 0; x < paddedTileSize; x++) {
            float value = intensity[y * paddedTileSize + x];
            if (value != 0) {
                // convolve to x +/- radius bounded by the limit we care about.
                int start = MAX(lowerLimit, x - (int)data->_radius);
                int end = MIN(upperLimit, x + (int)data->_radius);
                for (int x2 = start; x2 <= end; x2++) {
                    float scaledKernel = value * [data->_kernel[x2 - x + data->_radius] floatValue];
                    // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
                    intermediate[y * paddedTileSize + x2] += scaledKernel;
                    // ^
                }
            }
        }
    }
    free(intensity);
    // Convolve vertically to get final intensity.
    float *finalIntensity = calloc(kGMUTileSize * kGMUTileSize, sizeof(float));
    for (int x = lowerLimit; x <= upperLimit; x++) {
        for (int y = 0; y < paddedTileSize; y++) {
            float value = intermediate[y * paddedTileSize + x];
            if (value != 0) {
                int start = MAX(lowerLimit, y - (int)data->_radius);
                int end = MIN(upperLimit, y + (int)data->_radius);
                for (int y2 = start; y2 <= end; y2++) {
                    float scaledKernel = value * [data->_kernel[y2 - y + data->_radius] floatValue];
                    // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
                    finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
                    // ^
                }
            }
        }
    }
    free(intermediate);
    // ...
}
This is the method where the intensities are calculated for each tile, right? If so, how can I change it to achieve my desired effect (averaged rather than summed colors, which I think are proportional to intensity)?
So: How can I have averaged instead of summed intensities by modifying the framework?
I think you are on the right track. To calculate an average you divide the point sum by the point count. Since you already have the sums calculated, I think an easy solution would be to also save the count for each point. If I understand it correctly, this is what you have to do.
When allocating memory for the sums also allocate memory for the counts
// At this place
float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
// Add this line, calloc will initialize them to zero
int *counts = calloc(paddedTileSize * paddedTileSize, sizeof(int));
Then increase the count in each loop.
// Below this line (first loop)
intermediate[y * paddedTileSize + x2] += scaledKernel;
// Add this
counts[y * paddedTileSize + x2]++;
// And below this line (second loop)
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
// Add this
counts[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit]++;
After the two loops you should have two arrays: one with your sums (finalIntensity) and one with your counts (counts). Now go through the values and calculate the averages.
// finalIntensity is only kGMUTileSize x kGMUTileSize, so average over that grid.
for (int y = 0; y < kGMUTileSize; y++) {
    for (int x = 0; x < kGMUTileSize; x++) {
        int n = y * kGMUTileSize + x;
        if (counts[n] != 0)
            finalIntensity[n] = finalIntensity[n] / counts[n];
    }
}
free(counts);
The finalIntensity should now contain your averages.
If you prefer, and the rest of the code makes it possible, you can skip the last loop and instead do the division when using the final intensity values. Just change any subsequent finalIntensity[n] to counts[n] == 0 ? finalIntensity[n] : finalIntensity[n] / counts[n].
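For example, a tiny helper (averagedIntensity is my own name, not part of the framework) could wrap that expression so the later code does not repeat it:
// Hypothetical helper: returns the averaged intensity for cell n, falling back to the raw
// sum when no points contributed to that cell.
static inline float averagedIntensity(const float *finalIntensity, const int *counts, int n) {
    return counts[n] == 0 ? finalIntensity[n] : finalIntensity[n] / counts[n];
}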
I may have just solved the same issue for the Java version.
My problem was having a custom gradient with 12 different values.
But my actual weighted data does not necessarily contain all intensity values from 1 to 12.
The problem is, the highest intensity value gets mapped to the highest color.
Also 10 datapoints with intensity 1 that are close by will get the same color as a single point with intensity 12.
So the function where the tile gets created is a good starting point:
Java:
public Tile getTile(int x, int y, int zoom) {
    // ...
    // Quantize points
    int dim = TILE_DIM + mRadius * 2;
    double[][] intensity = new double[dim][dim];
    int[][] count = new int[dim][dim];
    for (WeightedLatLng w : points) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    // Quantize wraparound points (taking xOffset into account)
    for (WeightedLatLng w : wrappedPoints) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x + xOffset - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    for (int bx = 0; bx < dim; bx++)
        for (int by = 0; by < dim; by++)
            if (count[bx][by] != 0)
                intensity[bx][by] /= count[bx][by];
    // ...
I added a counter and count every addition to the intensities, after that I go through every intensity and calculate the average.
For C:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
    // ...
    // Quantize points.
    int paddedTileSize = kGMUTileSize + 2 * (int)data->_radius;
    float *intensity = calloc(paddedTileSize * paddedTileSize, sizeof(float));
    int *count = calloc(paddedTileSize * paddedTileSize, sizeof(int));
    for (GMUWeightedLatLng *item in points) {
        GQTPoint p = [item point];
        int x = (int)((p.x - minX) / bucketWidth);
        // Flip y axis as world space goes south to north, but tile content goes north to south.
        int y = (int)((maxY - p.y) / bucketWidth);
        // If the point is just on the edge of the query area, the bucketing could put it outside bounds.
        if (x >= paddedTileSize) x = paddedTileSize - 1;
        if (y >= paddedTileSize) y = paddedTileSize - 1;
        intensity[y * paddedTileSize + x] += item.intensity;
        count[y * paddedTileSize + x]++;
    }
    for (GMUWeightedLatLng *item in wrappedPoints) {
        GQTPoint p = [item point];
        int x = (int)((p.x + wrappedPointsOffset - minX) / bucketWidth);
        // Flip y axis as world space goes south to north, but tile content goes north to south.
        int y = (int)((maxY - p.y) / bucketWidth);
        // If the point is just on the edge of the query area, the bucketing could put it outside bounds.
        if (x >= paddedTileSize) x = paddedTileSize - 1;
        if (y >= paddedTileSize) y = paddedTileSize - 1;
        // For wrapped points, additional shifting risks bucketing slipping just outside due to
        // numerical instability.
        if (x < 0) x = 0;
        intensity[y * paddedTileSize + x] += item.intensity;
        count[y * paddedTileSize + x]++;
    }
    for (int i = 0; i < paddedTileSize * paddedTileSize; i++)
        if (count[i] != 0)
            intensity[i] /= count[i];
Next is the convolving.
What I did there is to make sure that the calculated value does not go over the maximum in my data.
Java:
// Convolve it ("smoothen" it out)
double[][] convolved = convolve(intensity, mKernel, mMaxAverage);
// the mMaxAverage gets set here:
public void setWeightedData(Collection<WeightedLatLng> data) {
    // ...
    // Add points to quad tree
    for (WeightedLatLng l : mData) {
        mTree.add(l);
        mMaxAverage = Math.max(l.getIntensity(), mMaxAverage);
    }
    // ...
}
// And finally the convolve method:
static double[][] convolve(double[][] grid, double[] kernel, double max) {
    // ...
    intermediate[x2][y] += val * kernel[x2 - (x - radius)];
    if (intermediate[x2][y] > max) intermediate[x2][y] = max;
    // ...
    outputGrid[x - radius][y2 - radius] += val * kernel[y2 - (y - radius)];
    if (outputGrid[x - radius][y2 - radius] > max) outputGrid[x - radius][y2 - radius] = max;
For C:
// To get the maximum average you could do that here:
- (void)setWeightedData:(NSArray<GMUWeightedLatLng *> *)weightedData {
    _weightedData = [weightedData copy];
    for (GMUWeightedLatLng *dataPoint in _weightedData)
        _maxAverage = MAX(dataPoint.intensity, _maxAverage);
    // ...
}
// And then simply in the convolve section
intermediate[y * paddedTileSize + x2] += scaledKernel;
if (intermediate[y * paddedTileSize + x2] > _maxAverage)
    intermediate[y * paddedTileSize + x2] = _maxAverage;
// ...
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
if (finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] > _maxAverage)
    finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] = _maxAverage;
And finally the coloring
Java:
// The maximum intensity is simply the size of my gradient colors array (or the starting points)
Bitmap bitmap = colorize(convolved, mColorMap, mGradient.mStartPoints.length);
For C:
// Generate coloring
// ...
float max = [data->_maxIntensities[zoom] floatValue];
max = _gradient.startPoints.count;
I did this in Java and it worked for me, not sure about the C-code though.
You have to play around with the radius, and you could even edit the kernel, because I found that when I have a lot of homogeneous data (i.e. little variation in the intensities, or a lot of data in general) the heat map degenerates into a one-colored overlay, because the gradient at the edges gets smaller and smaller.
But hope this helps anyway.
// Erik

Failing to properly initialize a 2D texture from memory in Direct3D 11

I am trying to produce a simple array in system memory that represents an R8G8B8A8 texture and then transfer that texture to GPU memory.
First, I allocate an array and fill it with the desired green color data:
frame.width = 3;
frame.height = 1;
auto components = 4;
auto length = components * frame.width * frame.height;
frame.data = new uint8_t[length];
frame.data[0 + 0 * frame.width] = 0; frame.data[1 + 0 * frame.width] = 255; frame.data[2 + 0 * frame.width] = 0; frame.data[3 + 0 * frame.width] = 255;
frame.data[0 + 1 * frame.width] = 0; frame.data[1 + 1 * frame.width] = 255; frame.data[2 + 1 * frame.width] = 0; frame.data[3 + 1 * frame.width] = 255;
frame.data[0 + 2 * frame.width] = 0; frame.data[1 + 2 * frame.width] = 255; frame.data[2 + 2 * frame.width] = 0; frame.data[3 + 2 * frame.width] = 255;
Then, I create the texture object and set it as the pixel shader resource:
D3D11_TEXTURE2D_DESC textureDescription;
textureDescription.Width = frame.width;
textureDescription.Height = frame.height;
textureDescription.MipLevels = textureDescription.ArraySize = 1;
textureDescription.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
textureDescription.SampleDesc.Count = 1;
textureDescription.SampleDesc.Quality = 0;
textureDescription.Usage = D3D11_USAGE_DYNAMIC;
textureDescription.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDescription.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
textureDescription.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initialTextureData;
initialTextureData.pSysMem = frame.data;
initialTextureData.SysMemPitch = frame.width * components;
initialTextureData.SysMemSlicePitch = 0;
DX_CHECK(m_device->CreateTexture2D(&textureDescription, &initialTextureData, &m_texture));
DX_CHECK(m_device->CreateShaderResourceView(m_texture, NULL, &m_textureView));
m_context->PSSetShaderResources(0, 1, &m_textureView);
My expectation is that the GPU memory will contain a 3x1 green texture and that each texel will have 1.0f in the alpha channel. However, this is not the case, as can be seen by examining the loaded texture object via the Visual Studio Graphics Debugger.
Could someone explain what is happening? How can I fix this?
Let's take a look at your array addressing scheme (the indices below are evaluated using the dimensions you provided):
frame.data[0] = 0; frame.data[1] = 255; frame.data[2] = 0; frame.data[3] = 255;
frame.data[3] = 0; frame.data[4] = 255; frame.data[5] = 0; frame.data[6] = 255;
frame.data[6] = 0; frame.data[7] = 255; frame.data[8] = 0; frame.data[9] = 255;
Re-ordering, we get
data[ 0] = 0 B pixel 1
data[ 1] = 255 G pixel 1
data[ 2] = 0 R pixel 1
data[ 3] = 0 (overwritten) A pixel 1
data[ 4] = 255 pixel 2
data[ 5] = 0
data[ 6] = 0
data[ 7] = 255
data[ 8] = 0 pixel 3
data[ 9] = 255
data[10] = undefined
data[11] = undefined
As you see, this is exactly the data that your debugger shows you.
So you just need to modify your addressing scheme. The correct formula would be:
index = component + x * components + y * pitch,
where you defined a dense packing with
pitch = width * components
In order to derive this formula, you just need to think about how many indices you have to skip when you increase one of the variables. E.g. when you increase the current component, you just need to step one entry further (because all components are right next to each other). On the other hand, if you increase the y-coordinate, you need to skip as many entries as there are in a row (this is called the pitch, which is the width of the image multiplied by the number of components for a dense packing).
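To illustrate the formula, the fill code from the question could be rewritten with an explicit loop. This is only a sketch reusing the question's frame fields and components, keeping the same green/alpha values (note the texture was declared as DXGI_FORMAT_B8G8R8A8_UNORM, so green is still component 1):
int pitch = components * frame.width; // bytes per row of a densely packed BGRA image
for (int y = 0; y < frame.height; ++y) {
    for (int x = 0; x < frame.width; ++x) {
        int index = x * components + y * pitch; // component + x * components + y * pitch
        frame.data[index + 0] = 0;   // B
        frame.data[index + 1] = 255; // G
        frame.data[index + 2] = 0;   // R
        frame.data[index + 3] = 255; // A
    }
}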

iPhone 6 compare two UIColors

I have two colors; here is a log of the instances:
(lldb) po acolor
UIDeviceRGBColorSpace 0.929412 0.133333 0.141176 1
(lldb) po hexColor
UIDeviceRGBColorSpace 0.929412 0.133333 0.141176 1
I have this code that works on iPhone 4S and 5 but not on iPhone 6:
if ([acolor isEqual:hexColor])
{
// other code here.
}
Additionally, I create acolor from a pixel of an image:
CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
But it seems there are some differences in the values between iPhone 4 and iPhone 6.
For example, if I print red on iPhone 6 it shows:
(lldb) po red
0.92941176470588238
and for iPhone 4:
(lldb) po red
0.929411768
The values are different because of the 32-bit vs 64-bit architecture, I think.
But as we can see, the colors seem to have the right values and appear to be rounded, yet the comparison never succeeds: acolor is never equal to hexColor. It only works on iPhone 4 and 5.
Of course I could use float instead of CGFloat, but I just noticed that the rounding seems to work and the UIColors have the same values on different devices, yet the comparison still does not work.
I get hexColor using these methods:
+ (UIColor *)colorWithHexString:(NSString *)hexString
{
const char *cStr = [hexString cStringUsingEncoding:NSASCIIStringEncoding];
long x = strtol(cStr, NULL, 16);
return [UIColor colorWithHex:(UInt32)x];
}
+ (UIColor *)colorWithHex:(UInt32)col {
unsigned char r, g, b;
b = col & 0xFF;
g = (col >> 8) & 0xFF;
r = (col >> 16) & 0xFF;
return [UIColor colorWithRed:(float)r/255.0f
green:(float)g/255.0f
blue:(float)b/255.0f
alpha:1];
}
Did you try rounding both to the same number of decimals and then comparing them?
For example, what happens if you round both to 9 decimals?
float rounded = roundf (original * 1000000000) / 1000000000.0;
This piece of code is adapted from alastair's post in this question: Round a float value to two post decimal positions
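Another option, instead of rounding, is to compare the RGBA components with a small tolerance, which sidesteps the 32-bit vs 64-bit CGFloat difference entirely. This is only a sketch (colorsAreClose is my own helper, not a UIKit API):
// Treats two colors as equal when every RGBA component differs by less than tolerance.
// Both colors must be convertible to RGBA for getRed:green:blue:alpha: to succeed.
static BOOL colorsAreClose(UIColor *a, UIColor *b, CGFloat tolerance) {
    CGFloat r1, g1, b1, a1, r2, g2, b2, a2;
    if (![a getRed:&r1 green:&g1 blue:&b1 alpha:&a1]) return NO;
    if (![b getRed:&r2 green:&g2 blue:&b2 alpha:&a2]) return NO;
    return fabs(r1 - r2) < tolerance && fabs(g1 - g2) < tolerance &&
           fabs(b1 - b2) < tolerance && fabs(a1 - a2) < tolerance;
}
With a tolerance of about half of one 255-step, e.g. colorsAreClose(acolor, hexColor, 0.5 / 255.0), the two logged colors would compare equal on both 32-bit and 64-bit devices.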

How to make random height with some space between previous and next height in Objective-C?

I use this code in Objective-C to generate a random height between 100 and 1000.
My problem is that the new height is often near the previous one, which is not so nice.
So how can I make sure there is always some space (50px, for example) between the previous and the next height?
_randomHeight = arc4random() % (1000-100+1);
You just have to keep generating values until a value meets your requirement:
CGFloat height;
do {
    height = arc4random() % (1000 - 100 + 1);
} while (fabs(_randomHeight - height) < 50.0f);
_randomHeight = height;
This will give you the correct height:
float _randomHeight = arc4random() % 900 + 101;
Here is a solution that produces values between 100 and 1000, at least 50 away from the previous one, using only a single random draw to guarantee a fixed execution time:
/// Return a new random height between 100 and 1000, at least 50 away from the previous one.
+ (uint32_t)randomHeight {
    /// initial value is given equiprobability (1000 - 100 + 50)
    static int32_t _randomHeight = 950;
    // Signed arithmetic keeps the MAX() clamps working when _randomHeight is below 50 or above 850.
    int32_t lowValues = MAX(_randomHeight - 50, 0);
    int32_t highValues = MAX(850 - _randomHeight, 0);
    int32_t aRandom = (int32_t)arc4random_uniform((uint32_t)(lowValues + highValues + 1));
    _randomHeight = aRandom < lowValues ? aRandom : _randomHeight + 50 + aRandom - lowValues;
    return (uint32_t)(_randomHeight + 100);
}

CGRectIntersectsRect for multiple CGRect

I have 8 UIImageViews, which have to be placed randomly. I generate a random x,y position for each image view, then I need to check if any of the image views are intersecting. If they are intersecting, it goes back to calculating random x,y positions again (do..while loop). Now the only method I know of is CGRectIntersectsRect, which can only compare 2 CGRects. Is there a way I can check whether all those image views intersect at once (inside the while condition)?
Here's what I already worked out for 3 images:
do {
    xpos1 = 60 + arc4random() % (960 - 60 + 1);
    ypos1 = 147 + arc4random() % (577 - 147 + 1);
    xpos2 = 60 + arc4random() % (960 - 60 + 1);
    ypos2 = 147 + arc4random() % (577 - 147 + 1);
    xpos3 = 60 + arc4random() % (960 - 60 + 1);
    ypos3 = 147 + arc4random() % (577 - 147 + 1);
} while (CGRectIntersectsRect(CGRectMake(xpos1, ypos1, 120, 120), CGRectMake(xpos2, ypos2, 120, 120)) ||
         CGRectIntersectsRect(CGRectMake(xpos2, ypos2, 120, 120), CGRectMake(xpos3, ypos3, 120, 120)) ||
         CGRectIntersectsRect(CGRectMake(xpos1, ypos1, 120, 120), CGRectMake(xpos3, ypos3, 120, 120)));
image1.center = CGPointMake(xpos1, ypos1);
image2.center = CGPointMake(xpos2, ypos2);
image3.center = CGPointMake(xpos3, ypos3);
A simple algorithm would be to start with one rectangle, and then iteratively find new rectangles that do not intersect with any of the previous ones:
int numRects = 8;
CGFloat xmin = 60, xmax = 960, ymin = 147, ymax = 577;
CGFloat width = 120, height = 120;
CGRect rects[numRects];
for (int i = 0; i < numRects; i++) {
    bool intersects;
    do {
        // Create random rect:
        CGFloat x = xmin + arc4random_uniform(xmax - xmin + 1);
        CGFloat y = ymin + arc4random_uniform(ymax - ymin + 1);
        rects[i] = CGRectMake(x, y, width, height);
        // Check if it intersects with one of the previous rects:
        intersects = false;
        for (int j = 0; j < i; j++) {
            if (CGRectIntersectsRect(rects[i], rects[j])) {
                intersects = true;
                break;
            }
        }
        // repeat until new rect does not intersect with previous rects:
    } while (intersects);
}
This should answer your question ("how to check for intersection with multiple rectangles"), but note that this method is not perfect. If the rectangles fill "much" of the available space and the first rectangles are placed "badly", the algorithm might not terminate because it cannot find an admissible rectangle at some point. I don't think that can happen with the dimensions used in your case, but you might keep it in mind. A possible solution could be to count the number of tries that were made, and if it takes too long, start over from the beginning.
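A rough sketch of that idea, reusing the variables from the snippet above (maxTriesPerRect is an arbitrary cap, not something from the original code):
const int maxTriesPerRect = 1000; // arbitrary limit before giving up on the current layout
bool success;
do {
    success = true;
    for (int i = 0; i < numRects && success; i++) {
        bool intersects;
        int tries = 0;
        do {
            CGFloat x = xmin + arc4random_uniform(xmax - xmin + 1);
            CGFloat y = ymin + arc4random_uniform(ymax - ymin + 1);
            rects[i] = CGRectMake(x, y, width, height);
            intersects = false;
            for (int j = 0; j < i && !intersects; j++) {
                intersects = CGRectIntersectsRect(rects[i], rects[j]);
            }
        } while (intersects && ++tries < maxTriesPerRect);
        if (intersects) success = false; // could not place this rect: restart from the beginning
    }
} while (!success);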
Also, if you have to create many rectangles, the inner loop (that checks for the intersection) can be improved by sorting the rectangles, so that fewer comparisons have to be made.
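The sorting idea is only mentioned in passing; one way it could look (my own sketch, assuming the accepted rects are kept sorted by ascending origin.x as they are inserted):
// Returns YES if candidate intersects any of the already placed rects. Because the array is
// sorted by origin.x, the loop can stop as soon as a rect starts to the right of candidate.
static BOOL intersectsAnySorted(CGRect candidate, const CGRect *placedSortedByX, int count) {
    for (int j = 0; j < count; j++) {
        if (placedSortedByX[j].origin.x > CGRectGetMaxX(candidate)) {
            break; // every later rect starts even further right, so no overlap is possible
        }
        if (CGRectIntersectsRect(candidate, placedSortedByX[j])) {
            return YES;
        }
    }
    return NO;
}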
Say you have generated a point:
CGFloat x = (CGFloat) (arc4random() % (int) self.view.bounds.size.width);
CGFloat y = (CGFloat) (arc4random() % (int) self.view.bounds.size.height);
CGPoint point = CGPointMake(x, y);
while ([self checkPointExist:point]) {
    x = (CGFloat) (arc4random() % (int) self.view.bounds.size.width);
    y = (CGFloat) (arc4random() % (int) self.view.bounds.size.height);
    point = CGPointMake(x, y);
}
-(BOOL)checkPointExist:(CGPoint)point {
    for (UIView *aView in [self.view subviews]) {
        if (CGRectContainsPoint(aView.frame, point)) {
            return TRUE; // There is already an image view here; generate another point
        }
    }
    return FALSE;
}
