Add space around UITableViewCell - iOS

I'm trying to add space around a UITableViewCell in iOS 7, just like the grouped table cells in iOS 6 (similar to a Facebook news-feed cell).
I have a custom UITableViewCell which I add to a grouped table view. Inside cellForRowAtIndexPath: I tried the following:
int x = cell.frame.origin.x;
int y = cell.frame.origin.y;
int h = cell.frame.size.height;
int w = cell.frame.size.width;
NSLog(@"x : %i y : %i h = %i w = %i", x, y, h, w);
CGRect newFrame = cell.frame;
newFrame.origin.x = 10;
cell.frame = newFrame;
x = cell.frame.origin.x;
NSLog(@"x : %i y : %i h = %i w = %i", x, y, h, w);
[[cell layer] setBorderWidth:1.0f];
I tried to change the x origin of the cell. When I print x, it says 0 the first time and 10 the second time, but the cell always stays at 0:
2013-11-04 22:07:30.334 iosproj[6256:70b] x : 0 y : 0 h = 71 w = 320
2013-11-04 22:07:30.335 iosproj[6256:70b] x : 10 y : 0 h = 71 w = 320
2013-11-04 22:07:30.337 iosproj[6256:70b] x : 0 y : 0 h = 71 w = 320
2013-11-04 22:07:30.337 iosproj[6256:70b] x : 10 y : 0 h = 71 w = 320
Any help on this is appreciated.
Thanks,

This is not answering the question directly, but if you really want a custom design for your view and cells, you could use a UICollectionView and UICollectionViewCells instead. You can then use a UICollectionViewLayout to handle the spacing between sections and cells. It's all very clear in the documentation for iOS.
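A minimal sketch of that approach, assuming a plain UICollectionViewFlowLayout; the item size and spacing values below are placeholders, not taken from your layout:
// Configure spacing with a flow layout instead of fighting UITableViewCell frames.
UICollectionViewFlowLayout *layout = [[UICollectionViewFlowLayout alloc] init];
layout.itemSize = CGSizeMake(300.0f, 71.0f);            // roughly the old cell size
layout.minimumLineSpacing = 10.0f;                      // vertical gap between cells
layout.sectionInset = UIEdgeInsetsMake(10, 10, 10, 10); // margin around each section

UICollectionView *collectionView = [[UICollectionView alloc] initWithFrame:self.view.bounds
                                                       collectionViewLayout:layout];
collectionView.backgroundColor = [UIColor groupTableViewBackgroundColor];
[collectionView registerClass:[UICollectionViewCell class] forCellWithReuseIdentifier:@"Cell"];
collectionView.dataSource = self; // implement the UICollectionViewDataSource methods as usual
[self.view addSubview:collectionView];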

Related

Google Maps heat map color by average weight

The Google Maps iOS SDK's heat map (more specifically the Google-Maps-iOS-Utils framework) decides the color to render an area in essentially by calculating the density of the points in that area.
However, I would like to instead select the color based on the average weight or intensity of the points in that area.
From what I understand, this behavior is not built in (but who knows, the documentation sort of sucks). The color-picking is decided, I think, in /src/Heatmap/GMUHeatmapTileLayer.m. This is a relatively short file, but I am not very well versed in Objective-C, so I am having some difficulty figuring out what does what. I think -tileForX:y:zoom: in GMUHeatmapTileLayer.m is the important method, but I'm not sure, and even if it is, I don't quite know how to modify it. Towards the end of this method the data is 'convolved', first horizontally and then vertically; I think this is where the intensities are actually calculated. Unfortunately, I do not know exactly what it is doing, and I am afraid of changing things because I suck at Objective-C. This is what the convolve part of the method looks like:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
  // ...
  // Convolve data.
  int lowerLimit = (int)data->_radius;
  int upperLimit = paddedTileSize - (int)data->_radius - 1;
  // Convolve horizontally first.
  float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
  for (int y = 0; y < paddedTileSize; y++) {
    for (int x = 0; x < paddedTileSize; x++) {
      float value = intensity[y * paddedTileSize + x];
      if (value != 0) {
        // convolve to x +/- radius bounded by the limit we care about.
        int start = MAX(lowerLimit, x - (int)data->_radius);
        int end = MIN(upperLimit, x + (int)data->_radius);
        for (int x2 = start; x2 <= end; x2++) {
          float scaledKernel = value * [data->_kernel[x2 - x + data->_radius] floatValue];
          // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
          intermediate[y * paddedTileSize + x2] += scaledKernel;
          // ^
        }
      }
    }
  }
  free(intensity);
  // Convolve vertically to get final intensity.
  float *finalIntensity = calloc(kGMUTileSize * kGMUTileSize, sizeof(float));
  for (int x = lowerLimit; x <= upperLimit; x++) {
    for (int y = 0; y < paddedTileSize; y++) {
      float value = intermediate[y * paddedTileSize + x];
      if (value != 0) {
        int start = MAX(lowerLimit, y - (int)data->_radius);
        int end = MIN(upperLimit, y + (int)data->_radius);
        for (int y2 = start; y2 <= end; y2++) {
          float scaledKernel = value * [data->_kernel[y2 - y + data->_radius] floatValue];
          // I THINK THIS IS WHERE I NEED TO MAKE THE CHANGE
          finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
          // ^
        }
      }
    }
  }
  free(intermediate);
  // ...
}
This is the method where the intensities are calculated for each iteration, right? If so, how can I change it to achieve my desired effect (averaged rather than summed colors, which I think are proportional to intensity)?
So: how can I get averaged instead of summed intensities by modifying the framework?
I think you are on the right track. To calculate an average you divide the point sum by the point count. Since you already have the sums calculated, I think an easy solution would be to also save the count for each point. If I understand it correctly, this is what you have to do.
When allocating memory for the sums, also allocate memory for the counts:
// At this place
float *intermediate = calloc(paddedTileSize * paddedTileSize, sizeof(float));
// Add this line, calloc will initialize them to zero
int *counts = calloc(paddedTileSize * paddedTileSize, sizeof(int));
Then increase the count in each loop.
// Below this line (first loop)
intermediate[y * paddedTileSize + x2] += scaledKernel;
// Add this
counts[y * paddedTileSize + x2]++;
// And below this line (second loop)
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
// Add this
counts[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit]++;
After the two loops you should have two arrays, one with your sums (finalIntensity) and one with your counts (counts). Now go through the values and calculate the averages:
for (int y = 0; y < paddedTileSize; y++) {
  for (int x = 0; x < paddedTileSize; x++) {
    int n = y * paddedTileSize + x;
    if (counts[n] != 0)
      finalIntensity[n] = finalIntensity[n] / counts[n];
  }
}
free(counts);
The finalIntensity should now contain your averages.
If you prefer, and the rest of the code makes it possible, you can skip the last loop and instead do the division when using the final intensity values. Just change any subsequent finalIntensity[n] to counts[n] == 0 ? finalIntensity[n] : finalIntensity[n] / counts[n].
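As a tiny illustration of that alternative (n here is whatever index the surrounding code already computes):
// Instead of a separate averaging pass, divide at the point of use,
// guarding against division by zero.
float averagedValue = (counts[n] == 0) ? finalIntensity[n] : finalIntensity[n] / counts[n];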
I may have just solved the same issue for the Java version.
My problem was having a custom gradient with 12 different values, but my actual weighted data does not necessarily contain all intensity values from 1 to 12.
The problem is that the highest intensity value gets mapped to the highest color. Also, 10 data points with intensity 1 that are close together get the same color as a single point with intensity 12.
So the function where the tile gets created is a good starting point:
Java:
public Tile getTile(int x, int y, int zoom) {
    // ...
    // Quantize points
    int dim = TILE_DIM + mRadius * 2;
    double[][] intensity = new double[dim][dim];
    int[][] count = new int[dim][dim];
    for (WeightedLatLng w : points) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    // Quantize wraparound points (taking xOffset into account)
    for (WeightedLatLng w : wrappedPoints) {
        Point p = w.getPoint();
        int bucketX = (int) ((p.x + xOffset - minX) / bucketWidth);
        int bucketY = (int) ((p.y - minY) / bucketWidth);
        intensity[bucketX][bucketY] += w.getIntensity();
        count[bucketX][bucketY]++;
    }
    for (int bx = 0; bx < dim; bx++)
        for (int by = 0; by < dim; by++)
            if (count[bx][by] != 0)
                intensity[bx][by] /= count[bx][by];
    // ...
I added a counter and count every addition to the intensities; after that I go through every bucket and calculate the average.
For Objective-C:
- (UIImage *)tileForX:(NSUInteger)x y:(NSUInteger)y zoom:(NSUInteger)zoom {
  // ...
  // Quantize points.
  int paddedTileSize = kGMUTileSize + 2 * (int)data->_radius;
  float *intensity = calloc(paddedTileSize * paddedTileSize, sizeof(float));
  int *count = calloc(paddedTileSize * paddedTileSize, sizeof(int));
  for (GMUWeightedLatLng *item in points) {
    GQTPoint p = [item point];
    int x = (int)((p.x - minX) / bucketWidth);
    // Flip y axis as world space goes south to north, but tile content goes north to south.
    int y = (int)((maxY - p.y) / bucketWidth);
    // If the point is just on the edge of the query area, the bucketing could put it outside bounds.
    if (x >= paddedTileSize) x = paddedTileSize - 1;
    if (y >= paddedTileSize) y = paddedTileSize - 1;
    intensity[y * paddedTileSize + x] += item.intensity;
    count[y * paddedTileSize + x]++;
  }
  for (GMUWeightedLatLng *item in wrappedPoints) {
    GQTPoint p = [item point];
    int x = (int)((p.x + wrappedPointsOffset - minX) / bucketWidth);
    // Flip y axis as world space goes south to north, but tile content goes north to south.
    int y = (int)((maxY - p.y) / bucketWidth);
    // If the point is just on the edge of the query area, the bucketing could put it outside bounds.
    if (x >= paddedTileSize) x = paddedTileSize - 1;
    if (y >= paddedTileSize) y = paddedTileSize - 1;
    // For wrapped points, additional shifting risks bucketing slipping just outside due to
    // numerical instability.
    if (x < 0) x = 0;
    intensity[y * paddedTileSize + x] += item.intensity;
    count[y * paddedTileSize + x]++;
  }
  for (int i = 0; i < paddedTileSize * paddedTileSize; i++)
    if (count[i] != 0)
      intensity[i] /= count[i];
Next is the convolving. What I did there is make sure that the calculated value does not go over the maximum in my data.
Java:
// Convolve it ("smoothen" it out)
double[][] convolved = convolve(intensity, mKernel, mMaxAverage);

// the mMaxAverage gets set here:
public void setWeightedData(Collection<WeightedLatLng> data) {
    // ...
    // Add points to quad tree
    for (WeightedLatLng l : mData) {
        mTree.add(l);
        mMaxAverage = Math.max(l.getIntensity(), mMaxAverage);
    }
    // ...
}

// And finally the convolve method:
static double[][] convolve(double[][] grid, double[] kernel, double max) {
    // ...
    intermediate[x2][y] += val * kernel[x2 - (x - radius)];
    if (intermediate[x2][y] > max) intermediate[x2][y] = max;
    // ...
    outputGrid[x - radius][y2 - radius] += val * kernel[y2 - (y - radius)];
    if (outputGrid[x - radius][y2 - radius] > max) outputGrid[x - radius][y2 - radius] = max;
    // ...
}
For Objective-C:
// To get the maximum average you could do that here:
- (void)setWeightedData:(NSArray<GMUWeightedLatLng *> *)weightedData {
  _weightedData = [weightedData copy];
  for (GMUWeightedLatLng *dataPoint in _weightedData)
    _maxAverage = MAX(dataPoint.intensity, _maxAverage);
  // ...
}

// And then simply in the convolve section:
intermediate[y * paddedTileSize + x2] += scaledKernel;
if (intermediate[y * paddedTileSize + x2] > _maxAverage)
  intermediate[y * paddedTileSize + x2] = _maxAverage;
// ...
finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] += scaledKernel;
if (finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] > _maxAverage)
  finalIntensity[(y2 - lowerLimit) * kGMUTileSize + x - lowerLimit] = _maxAverage;
And finally, the coloring.
Java:
// The maximum intensity is simply the size of my gradient colors array (or the starting points)
Bitmap bitmap = colorize(convolved, mColorMap, mGradient.mStartPoints.length);
For Objective-C:
// Generate coloring
// ...
float max = [data->_maxIntensities[zoom] floatValue];
max = _gradient.startPoints.count;
I did this in Java and it worked for me; I'm not sure about the Objective-C code, though.
You have to play around with the radius, and you could even edit the kernel, because I found that when I have a lot of homogeneous data (i.e. little variation in the intensities, or just a lot of data in general) the heat map degenerates into a one-colored overlay, since the gradient on the edges gets smaller and smaller.
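For what it's worth, the radius and the gradient are set on the tile layer itself; a minimal sketch, assuming the stock Google-Maps-iOS-Utils API (property names may differ slightly between versions, and the colors, start points and radius below are placeholders):
// Hypothetical tuning of a heat map layer.
GMUGradient *gradient =
    [[GMUGradient alloc] initWithColors:@[ [UIColor blueColor], [UIColor redColor] ]
                            startPoints:@[ @0.2f, @1.0f ]
                           colorMapSize:256];
GMUHeatmapTileLayer *heatmapLayer = [[GMUHeatmapTileLayer alloc] init];
heatmapLayer.radius = 80;                 // larger radius = more smoothing
heatmapLayer.gradient = gradient;
heatmapLayer.weightedData = weightedData; // your NSArray<GMUWeightedLatLng *>
heatmapLayer.map = mapView;               // the GMSMapView the tiles are drawn on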
But I hope this helps anyway.
// Erik

iOS 9+ - Swift 2.1 -> maximum value

I have this code:
var w = 0
var h = 0
for i in 1...am
{
    if w > Int(screenSize.width)
    {
        w = 0
        h += CHeight
    }
    //some other code
    w += CWidth
}
So the value w is a fraction of the screen width, and the accumulated total may never be exactly equal to the screen width.
The if only does its job once the accumulated value w is already larger than the screen width. How can I make the if trigger when w is just about to reach the end of the screen width, i.e. before it goes over it?
if w > Int(screenSize.width) - CWidth
{
w = 0
h += CHeight
}

CGRectIntersectsRect for multiple CGRect

I have 8 UIImageViews, which have to be placed randomly. I generate a random x,y position for each image view, then I need to check whether any of the image views intersect. If they do, it goes back to calculating random x,y positions again (do..while loop). Now the only method I know of is CGRectIntersectsRect, which can only compare 2 CGRects. Is there a way I can check whether all those image views intersect at once (inside the while condition)?
Here's what I already worked out for 3 images:
do {
    xpos1 = 60 + arc4random() % (960 - 60 + 1);
    ypos1 = 147 + arc4random() % (577 - 147 + 1);
    xpos2 = 60 + arc4random() % (960 - 60 + 1);
    ypos2 = 147 + arc4random() % (577 - 147 + 1);
    xpos3 = 60 + arc4random() % (960 - 60 + 1);
    ypos3 = 147 + arc4random() % (577 - 147 + 1);
} while (CGRectIntersectsRect(CGRectMake(xpos1, ypos1, 120, 120), CGRectMake(xpos2, ypos2, 120, 120)) ||
         CGRectIntersectsRect(CGRectMake(xpos2, ypos2, 120, 120), CGRectMake(xpos3, ypos3, 120, 120)) ||
         CGRectIntersectsRect(CGRectMake(xpos1, ypos1, 120, 120), CGRectMake(xpos3, ypos3, 120, 120)));

image1.center = CGPointMake(xpos1, ypos1);
image2.center = CGPointMake(xpos2, ypos2);
image3.center = CGPointMake(xpos3, ypos3);
A simple algorithm would be to start with one rectangle, and then iteratively find new rectangles
that do not intersect with any of the previous ones:
int numRects = 8;
CGFloat xmin = 60, xmax = 960, ymin = 147, ymax = 577;
CGFloat width = 120, height = 120;
CGRect rects[numRects];

for (int i = 0; i < numRects; i++) {
    bool intersects;
    do {
        // Create random rect:
        CGFloat x = xmin + arc4random_uniform(xmax - xmin + 1);
        CGFloat y = ymin + arc4random_uniform(ymax - ymin + 1);
        rects[i] = CGRectMake(x, y, width, height);
        // Check if it intersects with one of the previous rects:
        intersects = false;
        for (int j = 0; j < i; j++) {
            if (CGRectIntersectsRect(rects[i], rects[j])) {
                intersects = true;
                break;
            }
        }
        // repeat until new rect does not intersect with previous rects:
    } while (intersects);
}
This should answer your question ("how to check for intersection with multiple rectangles"), but note that this method is not perfect. If the rectangles fill "much" of the available space and the first rectangles are placed "badly", the algorithm might not terminate because at some point it cannot find an admissible rectangle. I don't think that can happen with the dimensions used in your case, but you might keep it in mind. A possible solution is to count the number of tries that were made and, if it takes too long, start over from the beginning; see the sketch below.
Also, if you have to create many rectangles, the inner loop (that checks for the intersection) can be improved by sorting the rectangles, so that fewer comparisons have to be made.
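A minimal sketch of that retry idea, building on the loop above (maxTries is an arbitrary placeholder, and CoreGraphics is assumed to be imported):
// Try to place all rects; give up after maxTries attempts so the caller can start over.
static bool placeRects(CGRect *rects, int numRects, int maxTries,
                       CGFloat xmin, CGFloat xmax, CGFloat ymin, CGFloat ymax,
                       CGFloat width, CGFloat height) {
    int tries = 0;
    for (int i = 0; i < numRects; i++) {
        bool intersects;
        do {
            if (++tries > maxTries) return false; // too many attempts, caller should restart
            CGFloat x = xmin + arc4random_uniform((uint32_t)(xmax - xmin + 1));
            CGFloat y = ymin + arc4random_uniform((uint32_t)(ymax - ymin + 1));
            rects[i] = CGRectMake(x, y, width, height);
            intersects = false;
            for (int j = 0; j < i; j++) {
                if (CGRectIntersectsRect(rects[i], rects[j])) { intersects = true; break; }
            }
        } while (intersects);
    }
    return true;
}
// Usage: keep restarting until a full placement succeeds.
// CGRect rects[8];
// while (!placeRects(rects, 8, 1000, 60, 960, 147, 577, 120, 120)) {}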
Say you have generated a point:
CGFloat x = (CGFloat)(arc4random() % (int)self.view.bounds.size.width);
CGFloat y = (CGFloat)(arc4random() % (int)self.view.bounds.size.height);
CGPoint point = CGPointMake(x, y);
while ([self checkPointExist:point]) {
    x = (CGFloat)(arc4random() % (int)self.view.bounds.size.width);
    y = (CGFloat)(arc4random() % (int)self.view.bounds.size.height);
    point = CGPointMake(x, y);
}

- (BOOL)checkPointExist:(CGPoint)point {
    for (UIView *aView in [self.view subviews]) {
        if (CGRectContainsPoint(aView.frame, point)) {
            return TRUE; // There is already an image view here; generate another point.
        }
    }
    return FALSE;
}

Get and set pixels of a grayscale image using Emgu CV

I am trying to get and set pixels of a grayscale image using Emgu CV with C#.
If I use a large image, this error occurs: "Index was outside the bounds of the array."
If I use an image of 200x200 or smaller there is no error, but I don't understand why.
Following is my code:
Image<Gray , byte> grayImage;
--------------------------------------------------------------------
for (int v = 0; v < grayImage.Height; v++)
{
    for (int u = 0; u < grayImage.Width; u++)
    {
        byte a = grayImage.Data[u, v, 0]; //Get Pixel Color | fast way
        byte b = (byte)(myHist[a] * (K - 1) / M);
        grayImage.Data[u, v, 0] = b; //Set Pixel Color | fast way
    }
}
--------------------------------------------------------------------
http://i306.photobucket.com/albums/nn262/neji1909/9-6-25565-10-39.png
Please help me and sorry I am not good at English.
You are not indexing by (x, y) but by (row, col), i.e. inverted. When you used a 200x200 image it made no difference whether you used width or height.
You could also do this with pointers (much faster), because when you use indexing, Emgu CV internally calls OpenCV for every single pixel.
So:
byte* ptr = (byte*)image.MIplImage.imageData;
int stride = image.MIplImage.widthStep;
int width = image.Width;
int height = image.Height;

for (int j = 0; j < height; j++)
{
    for (int i = 0; i < width; i++)
    {
        byte a = ptr[i]; // current gray value
        ptr[i] = (byte)(myHist[a] * (K - 1) / M);
    }
    ptr += stride;
}
That's because the x and y are inverted in the Data array. You should change your code this way (invert u and v):
for (int v = 0; v < grayImage.Height; v++)
{
    for (int u = 0; u < grayImage.Width; u++)
    {
        byte a = grayImage.Data[v, u, 0]; //Get Pixel Color | fast way
        byte b = (byte)(myHist[a] * (K - 1) / M);
        grayImage.Data[v, u, 0] = b; //Set Pixel Color | fast way
    }
}
See also Iterate over pixels of an image with emgu cv

Per-pixel collision when animating sprites

This is what I have for detecting collision.
public static bool IntersectPixels(Rectangle rectangleA, Color[] dataA, Rectangle rectangleB, Color[] dataB)
{
    int top = Math.Max(rectangleA.Top, rectangleB.Top);
    int bottom = Math.Min(rectangleA.Bottom, rectangleB.Bottom);
    int left = Math.Max(rectangleA.Left, rectangleB.Left);
    int right = Math.Min(rectangleA.Right, rectangleB.Right);

    for (int y = top; y < bottom; y++)
    {
        for (int x = left; x < right; x++)
        {
            Color colorA = dataA[(x - rectangleA.Left) + (y - rectangleA.Top) * rectangleA.Width];
            Color colorB = dataB[(x - rectangleB.Left) + (y - rectangleB.Top) * rectangleB.Width];
            if (colorA.A != 0 && colorB.A != 0)
            {
                return true;
            }
        }
    }
    return false;
}
It works fine until I want to animate stuff. I have a sprite texture that has about 12 frames, and what I need to do is get the color data array for each frame. This is how I get the color data array:
Color[] playerColorArray = new Color[PlayerTexture.Width * PlayerTexture.Height];
PlayerTexture.GetData(playerColorArray);
CData = playerColorArray;
Now my guess is that I have to update the texture data every time the frame changes.
Is there a way to get the color data from each frame only?
You can get an array of the complete sprite sheet texture and only use the current frame.
Let's say you have a sprite sheet and stride is the offset of a pixel to the pixel below it. This can be the sprite sheet's width. Furthermore, you have the position x0, y0 of the first pixel of the current frame. Then you just have to modify the index calculation:
int posXInFrame = (x - rectangleA.Left);
int posYInFrame = (y - rectangleA.Top);
Color colorA = dataA[(posXInFrame + x0) + (posYInFrame + y0) * stride];
Probably, you have calculated x0 and y0 somewhere else and can pass those values to the function.
