I am working in iOS and have been presented with this problem. I will be receiving a relatively large set of data (a 500x500 or greater C-style array). I need to construct what is essentially an X/Y plot from this data, where each datapoint in the 500x500 grid corresponds to a color based on its value. The catch is that the data changes over time, producing a sort of animation, so the calculations have to be fast enough to keep up when a new set of data comes in.
So basically, for every point in the array, I need to figure out which color it should map to, then figure out the square to draw on the grid to represent the data. If my grid were 768x768 pixels, but I have a 500x500 dataset, then each datapoint would represent about a 1.5x1.5 rectangle (that's rounded, but I hope you get the idea).
I tried this by creating a new view class and overriding drawRect. However, that met with horrible performance with anything much over a 20x20 dataset.
I have seen some suggestions about writing to image buffers, but I have not been able to find any examples of doing that (I'm pretty new to iOS). Do you have any suggestions or could you point me to any resources which could help?
Thank you for your time,
Darryl
Here's some code that you can put in a method that will generate and return a UIImage in an offscreen context. To improve performance, try to come up with ways to minimize the number of iterations, such as making your "pixels" bigger, or only drawing a portion that changes.
UIGraphicsBeginImageContext(size); // Use your own image size here
CGContextRef context = UIGraphicsGetCurrentContext();

// Push the context to make it current
// (we need to do this manually because we are not drawing in a UIView).
UIGraphicsPushContext(context);

for (CGFloat x = 0.0; x < size.width; x += 1.0) {
    for (CGFloat y = 0.0; y < size.height; y += 1.0) {
        // Set your color here
        CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(context, CGRectMake(x, y, 1.0, 1.0));
    }
}

// Pop the context.
UIGraphicsPopContext();

// Get a UIImage from the image context - enjoy!!!
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
[outputImage retain];

// Clean up the drawing environment.
UIGraphicsEndImageContext();

return [outputImage autorelease];
If you want this to be fast, you shouldn't call explicit drawing functions for every pixel. You can allocate memory and use CGBitmapContextCreate to build an image from the data in that memory, which is basically a byte array. Do your calculations and write the color information (a-r-g-b) directly into that buffer. You have to do the math yourself, though, regarding sub-pixel accuracy (blending).
I don't have an example at hand, but searching for CGBitmapContextCreate should point you in the right direction.
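To make that concrete, here is a rough, untested sketch of the approach; the plain float array and the grayscale ramp are only assumptions standing in for your real dataset and color map:
#import <UIKit/UIKit.h>

// Fill a width*height RGBA buffer from the dataset, wrap it in a bitmap
// context, and pull a UIImage out of it.
static UIImage *ImageFromDataset(const float *data, size_t width, size_t height)
{
    size_t bytesPerRow = width * 4;
    uint8_t *buffer = calloc(height, bytesPerRow);
    if (buffer == NULL) {
        return nil;
    }
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            float v = data[y * width + x];
            if (v < 0.0f) v = 0.0f;
            if (v > 1.0f) v = 1.0f;
            uint8_t level = (uint8_t)(v * 255.0f); // placeholder grayscale color map
            uint8_t *pixel = buffer + y * bytesPerRow + x * 4;
            pixel[0] = level; // R
            pixel[1] = level; // G
            pixel[2] = level; // B
            pixel[3] = 255;   // A
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(buffer);
    return image;
}
You would then hand the returned image to a UIImageView (or a layer's contents) each time a new dataset arrives.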
Related
I have an MKMapView where I want to grey out parts of the map. More specifically, I want to have some circles and rectangles which are displayed normally and the rest of the map has a semi-transparent grey layer. Something like this:
For that, I think that I should subclass MKOverlay and MKOverlayRenderer. As Apple suggests, in my MKOverlayRenderer subclass, I should override the drawMapRect:zoomScale:inContext: method, and draw my stuff using Core Graphics. My question is how can I draw the following with Core Graphics?
I have spent some hours looking at masking and clipping using Core Graphics, but I haven't found anything similar to this. The QuartzDemo has some examples of clipping and masking. I guess clipping with either the even-odd or nonzero winding number rules won't work for me, as the rectangles and circles are dynamic. I think I have to create a mask somehow, but I can't figure out how. The QuartzDemo creates a mask out of an image. How could I create a mask using rectangles and circles? Is there any other way I could approach this?
Thank you
You should be able to set up a context with transparency, draw the skinny rectangles and circles without stroking them, draw a rectangle border around the whole context, and then fill the shape with the darker color. You'll need to look into the fill rules (even-odd vs. nonzero winding) to make sure that the larger space is what gets filled rather than the smaller joined shapes.
I guess clipping with either the even-odd or nonzero winding number rules won't work for me, as the rectangles and circles are dynamic.
This shouldn't matter.
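To illustrate the fill-rule suggestion above, here is a minimal sketch using the even-odd rule inside drawMapRect:zoomScale:inContext: (fullRect, someCircleRect and someRectangleHole are hypothetical placeholders for the overlay bounds and your dynamic shapes):
CGContextSaveGState(context);
// Add the whole overlay area plus the "hole" shapes to a single path.
CGContextAddRect(context, fullRect);
CGContextAddEllipseInRect(context, someCircleRect);
CGContextAddRect(context, someRectangleHole);
// With the even-odd rule, the outer rectangle is filled and the shapes
// inside it are left clear, wherever they happen to be this frame.
CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 0.5); // semi-transparent grey
CGContextEOFillPath(context);
CGContextRestoreGState(context);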
Well, I worked it out myself. I have to create my own mask. This is done by drawing the "hole" shapes, outputting this to a CGImageRef, and inverting that CGImageRef to get the mask. Here is a code snippet that should work if you add it to the QuartzClippingView class in QuartzClipping.m of the QuartzDemo project, at the end of the drawInContext: method.
- (void)drawInContext:(CGContextRef)context {
    // ...
    CGContextSaveGState(context);

    // dimension of the square mask
    int dimension = 20;

    // create the mask
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(dimension, dimension), NO, 0.0f);
    CGContextRef newContext = UIGraphicsGetCurrentContext();

    // draw overlapping circle holes
    CGContextFillEllipseInRect(newContext, CGRectMake(0, 0, 10, 10));
    CGContextFillEllipseInRect(newContext, CGRectMake(0, 7, 10, 10));

    // grab the mask image from the offscreen context
    CGImageRef mask = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
    UIGraphicsEndImageContext();

    // the inverted mask is what we need
    CGImageRef invertedMask = [self invertMask:mask dimension:dimension];

    CGRect rectToDraw = CGRectMake(210.0, height - 290.0, 90.0, 90.0);

    // everything drawn in rectToDraw after this will have two holes
    CGContextClipToMask(context, rectToDraw, invertedMask);

    // drawing a red rectangle for this demo
    CGContextFillRect(context, rectToDraw);

    CGContextRestoreGState(context);
}
// taken from the QuartzMaskingView below
- (CGImageRef)invertMask:(CGImageRef)originalMask dimension:(int)dimension {
    // To show the difference with an image mask, we take the above image and
    // process it to extract the alpha channel as a mask.

    // Allocate data (one byte per pixel for the alpha-only bitmap)
    NSMutableData *data = [NSMutableData dataWithLength:dimension * dimension * 1];

    // Create an alpha-only bitmap context
    CGContextRef context = CGBitmapContextCreate([data mutableBytes], dimension, dimension, 8, dimension, NULL, (CGBitmapInfo)kCGImageAlphaOnly);

    // Set the blend mode to copy to avoid any alteration of the source data
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the image to extract the alpha channel
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, dimension, dimension), originalMask);

    // Now the alpha channel has been copied into our NSData object above,
    // so discard the context and let's make an image mask.
    CGContextRelease(context);

    // Create a data provider for our data object (NSMutableData is toll-free
    // bridged to CFMutableDataRef, which is compatible with CFDataRef)
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFMutableDataRef)data);

    // Create our new mask image with the same size as the original image
    CGImageRef invertedMask = CGImageMaskCreate(dimension, dimension, 8, 8, dimension, dataProvider, NULL, YES);
    CGDataProviderRelease(dataProvider);
    return invertedMask;
}
Any easier/more efficient solution is welcome :)
I'm setting up an OpenGL iOS project. I've read a lot about the projection matrix and gluUnProject. I want to draw on a 3D model dynamically, so I need to map points in my window to the corresponding points on the 3D object.
With gluUnProject I get a ray through my 3D scene. After that I can find the collision point with iterative algorithms (ray tracing)...
Now the problems:
How do I get the corresponding texture?
How do I get the corresponding vertices/pixels?
How can I write on that perspective texture/pixel?
How do I get the corresponding texture?
Getting the texture should be easy enough if you are using an object based approach to the objects in your scene. Just store a reference to the texture file name in the class and then iterate through your scene objects in your raycasting method, grabbing the texture name when you get a match.
How do I get the corresponding vertices/pixels?
Again this should be easy if you have used an object-based approach for your object drawing (i.e. an instantiation of a custom object class for each object in the scene). Assuming all your scene objects are in an NSMutableArray, you can just iterate through the array until you find a match with the raycasted object.
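For illustration only, the lookup might look roughly like this (SceneObject, intersectsRay:, textureName, vertices and pickRay are hypothetical names standing in for whatever your own scene classes and ray type provide):
// Walk the scene objects and grab the first one the pick ray hits.
SceneObject *hitObject = nil;
for (SceneObject *object in self.sceneObjects) {
    if ([object intersectsRay:pickRay]) {
        hitObject = object;
        break;
    }
}
NSString *textureName = hitObject.textureName; // texture for the picked object
NSArray *vertices = hitObject.vertices;        // and its geometry, if you need it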
How can I write on that perspective texture/pixel?
If you are looking at writing text on a new texture, one way of doing this is to use the layer of a UILabel as a texture (e.g. see below), but if you are looking at drawing on an existing texture, this is much more difficult (and, to be honest, best avoided).
// width, height, and text are assumed to be defined by the caller.
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, width, height)];
label.text = text;
label.font = [UIFont fontWithName:@"Helvetica" size:12];

UIGraphicsBeginImageContext(label.bounds.size);
CGContextRef myContext = UIGraphicsGetCurrentContext();

// Flip transform used to convert from UIKit's coordinate system to OpenGL's.
CGAffineTransform flipTransform = CGAffineTransformConcat(CGAffineTransformMakeTranslation(0.f, height),
                                                          CGAffineTransformMakeScale(1.f, -1.f));
CGContextConcatCTM(myContext, flipTransform);

[label.layer renderInContext:myContext];
UIImage *layerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I'm trying to zoom and translate an image on the screen.
Here's my drawRect:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetShouldAntialias(context, NO);
    CGContextScaleCTM(context, senderScale, senderScale);
    [self.image drawAtPoint:CGPointMake(imgposx, imgposy)];
    CGContextRestoreGState(context);
}
When senderScale is 1.0, moving the image (imgposx/imgposy) is very smooth. But if senderScale has any other value, performance takes a big hit and the image stutters when I move it.
The image I am drawing is a UIImage object. I create it with
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
and draw a simple UIBezierPath (stroked):
self.image = UIGraphicsGetImageFromCurrentImageContext();
Am I doing something wrong? Turning off the anti-aliasing did not improve things much.
Edit:
I tried this:
rectImage = CGRectMake(0, 0, self.frame.size.width * senderScale, self.frame.size.height * senderScale);
[image drawInRect:rectImage];
but it was just as slow as the other method.
If you want this to perform well, you should let the GPU do the heavy lifting by using Core Animation instead of drawing the image in your -drawRect: method. Try creating a view and doing:
myView.layer.contents = (__bridge id)self.image.CGImage;
Then zoom and translate it by manipulating the UIView relative to its superview. If you draw the image in -drawRect:, you're making it do the hard work of blitting the image for every frame. Doing it via Core Animation blits only once; after that the GPU zooms and translates the cached layer.
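As a rough sketch of that idea (imageLayerView is a hypothetical plain UIView hosting the image layer; senderScale, imgposx and imgposy are the values from your question):
// Assign the image to the layer once; no repeated blitting in drawRect:.
imageLayerView.layer.contents = (__bridge id)self.image.CGImage;
// On every pan/zoom update, only the view's transform changes; the GPU
// composites the cached layer contents at the new scale and position.
CGAffineTransform t = CGAffineTransformMakeTranslation(imgposx, imgposy);
t = CGAffineTransformScale(t, senderScale, senderScale);
imageLayerView.transform = t;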
I need to draw a line chart from values that arrive every half second. I've come up with a custom CALayer for this graph which stores all the previous lines and, every two seconds, redraws all previous lines and adds one new line. I find this solution non-optimal because there is only a need to draw one additional line on the layer; there is no reason to redraw potentially thousands of previous lines.
What do you think would be the best solution in this case?
Use your own bitmap context (created with CGBitmapContextCreate) or a UIImage as a backing store. Whenever new data comes in, draw into that context and set your layer's contents property to the context's image.
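A minimal sketch of that backing-store idea, under the assumption that backingContext is a CGContextRef property created once with CGBitmapContextCreate and graphLayer is the CALayer displaying the chart:
- (void)appendSegmentFrom:(CGPoint)previous to:(CGPoint)current
{
    // Append only the newest line segment to the persistent bitmap context.
    CGContextRef ctx = self.backingContext;
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextMoveToPoint(ctx, previous.x, previous.y);
    CGContextAddLineToPoint(ctx, current.x, current.y);
    CGContextStrokePath(ctx);

    // Push the updated backing store to the layer; nothing else is redrawn.
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    self.graphLayer.contents = (__bridge id)image;
    CGImageRelease(image);
}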
I am looking at an identical implementation. Graph updates every 500 ms. Similarly I felt uncomfortable drawing the entire graph each iteration. I implemented a solution 'similar' to what Nikolai Ruhe proposed as follows:
First some declarations:
#define TIME_INCREMENT 10
@property (nonatomic) UIImage *lastSnapshotOfPlot;
and then the drawLayer:inContext: method of my CALayer delegate:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // Restore the image of the layer from the last time through, if it exists
    if (self.lastSnapshotOfPlot)
    {
        // For some reason the image is being redrawn upside down!
        // This block of code adjusts the context to correct it.
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, 0, layer.bounds.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);

        // Now we can redraw the image right side up but shifted over a little bit
        // to allow space for the new data
        CGRect r = CGRectMake(-TIME_INCREMENT, 0, layer.bounds.size.width, layer.bounds.size.height);
        CGContextDrawImage(ctx, r, self.lastSnapshotOfPlot.CGImage);

        // And finally put the context back the way it was
        CGContextRestoreGState(ctx);
    }

    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextBeginPath(ctx);

    // This next section is where I draw the line segment on the extreme right end
    // which matches up with the stored graph on the image. This part of the code
    // is application specific and I have only left it here for
    // conceptual reference. Basically I draw a tiny line segment
    // from the last value to the new value at the extreme right end of the graph.
    CGFloat ppy = layer.bounds.size.height - _lastValue / _displayRange * layer.bounds.size.height;
    CGFloat cpy = layer.bounds.size.height - self.sensorData.currentvalue / _displayRange * layer.bounds.size.height;
    CGContextMoveToPoint(ctx, layer.bounds.size.width - TIME_INCREMENT, ppy); // Move to the previous point
    CGContextAddLineToPoint(ctx, layer.bounds.size.width, cpy);               // Draw to the latest point
    CGContextStrokePath(ctx);

    // Finally save the entire current layer to an image. This will include our latest
    // drawn line segment
    UIGraphicsBeginImageContext(layer.bounds.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    self.lastSnapshotOfPlot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Is this the most efficient way?
I have not been programming in Objective-C long enough to know, so all suggestions/improvements are welcome.
Unsatisfied with my previous results, I have been asked to create a freehand drawing view that will not blur when zoomed. The only way I can imagine this being possible is to use a CATiledLayer, because otherwise drawing a line while zoomed is just too slow. Currently, I have it set up so that it redraws every line every time, but I want to know if I can cache the results of the previous lines (not as pixels, because they need to scale well) in a context or something.
I thought about CGBitmapContext, but would that mean I would need to tear down and set up a new context after every zoom? The problem is that on a retina display, the line drawing is too slow (on iPad 2 it is so-so), especially when drawing while zoomed. There is an app in the App Store called GoodNotes which beautifully demonstrates that this is possible, and it is possible to do it smoothly, but I can't understand how they are doing it. Here is my code so far (result of most of today):
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(c, mLineWidth);
    CGContextSetAllowsAntialiasing(c, true);
    CGContextSetShouldAntialias(c, true);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);

    // Protect the local variables against the multithreaded nature of CATiledLayer
    [mLock lock];
    NSArray *pathsCopy = [mStrokes copy];
    for (UIBezierPath *path in pathsCopy) // **Would like to cache these**
    {
        CGContextAddPath(c, path.CGPath);
        CGContextStrokePath(c);
    }
    if (mCurPath)
    {
        CGContextAddPath(c, mCurPath.CGPath);
        CGContextStrokePath(c);
    }

    CGRect pathBounds = mCurPath.bounds;
    if (pathBounds.size.width > 32 || pathBounds.size.height > 32)
    {
        [mStrokes addObject:mCurPath];
        mCurPath = [[UIBezierPath alloc] init];
    }
    [mLock unlock];
}
Profiling shows the hottest function by far is GCSFillDRAM8by1
First, since stroking the paths is the most expensive operation, you shouldn't hold the lock around it, as that prevents you from drawing tiles concurrently on different cores.
Secondly, I think you could avoid calling CGContextStrokePath several times by adding all the paths to the context and stroking them all at once:
[mLock lock];
for (UIBezierPath *path in mStrokes) {
    CGContextAddPath(c, path.CGPath);
}
if (mCurPath) {
    CGContextAddPath(c, mCurPath.CGPath);
}

CGRect pathBounds = mCurPath.bounds;
if (pathBounds.size.width > 32 || pathBounds.size.height > 32)
{
    [mStrokes addObject:mCurPath];
    mCurPath = [[UIBezierPath alloc] init];
}
[mLock unlock];

CGContextStrokePath(c);
The CGContextRef is just a canvas in which the drawing operations occur. You cannot cache it, but you may create a CGImageRef with a flattened bitmap image of your paths and reuse that image. This won't help with zooming (as you'd need to recreate the image when the level of detail changes) but can be useful to improve performance when the user is drawing a really long path.
There is a really interesting WWDC 2012 Session Video on that subject: Optimizing 2D Graphics and Animation Performance.
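For example, a sketch of flattening the finished strokes into a cached image (mFlattenedImage is an assumed UIImage ivar; mStrokes, mCurPath and mLineWidth are the ones from the question) could look like this:
- (void)flattenStrokes
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(c, mLineWidth);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);
    // Stroke every finished path into the offscreen image in one go.
    for (UIBezierPath *path in mStrokes) {
        CGContextAddPath(c, path.CGPath);
    }
    CGContextStrokePath(c);
    mFlattenedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
drawRect: would then only draw mFlattenedImage plus the in-progress path. As noted above, the cached image still has to be rebuilt whenever the zoom level (and hence the required level of detail) changes.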
The bottleneck was actually the way I was using CATiledLayer. I guess it is just too much to keep updating with freehand drawing. I had set it up with levels of detail as I saw in the docs and tutorials online, but in the end I didn't need that much. I just hooked up the scroll view delegate, cleared the layer's contents when zooming finished, and changed the contentsScale of the layer to match the scroll view. The result was beautiful (the content disappears and fades back in, but that can't be helped).
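For reference, one possible reading of that setup (drawingView is an assumed reference to the freehand drawing view inside the scroll view) is something like:
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    CALayer *layer = self.drawingView.layer;
    layer.contents = nil;                                      // drop the now-blurry contents
    layer.contentsScale = scale * [UIScreen mainScreen].scale; // match the zoom level
    [layer setNeedsDisplay];                                   // redraw sharply at the new scale
}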