In my view I have a few subviews, each of which is a UIImageView. Each UIImageView contains an image with an alpha channel.
This is the image:
I use the method below to detect a touch within the view:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchLocation = [touch locationInView:self.view];
    NSArray *views = [self.view subviews];
    for (UIView *v in views) {
        if ([v isKindOfClass:[Piece class]]) {
            if (CGRectContainsPoint(v.frame, touchLocation) && ((Piece *)v).snapped == FALSE) {
                UITouch *touchInPiece = [touches anyObject];
                CGPoint point = [touchInPiece locationInView:(Piece *)v];
                BOOL solidColor = [self verifyAlphaPixelImage:(Piece *)v atX:point.x atY:point.y];
                if (solidColor) {
                    dragging = YES;
                    oldX = touchLocation.x;
                    oldY = touchLocation.y;
                    piece = (Piece *)v;
                    [self.view bringSubviewToFront:piece];
                    break;
                }
            }
        }
    }
}
and this method to verify the alpha pixel:
- (BOOL)verifyAlphaPixelImage:(Piece *)image atX:(int)x atY:(int)y
{
    CGImageRef imageRef = [image.image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    // CGFloat red = (rawData[byteIndex]);
    // CGFloat green = (rawData[byteIndex + 1]);
    // CGFloat blue = (rawData[byteIndex + 2]);
    CGFloat alpha = (rawData[byteIndex + 3]);
    NSLog(@"%f", alpha);
    free(rawData);
    if (alpha == 255.0) return NO;
    else return YES;
}
If an alpha (transparent) pixel is found, I need to pass the touch to the next UIImageView below the one I tapped.
For example, if I have stacked UIImageViews and I touch the first one:
I should first verify the first UIImageView;
if I touched an alpha pixel, I should move to the next UIImageView at the same coordinates and check it for an alpha pixel too.
If the 2nd, 3rd, 4th, or 5th does not have an alpha pixel at my coordinates, I should select that UIImageView.
For now I check my pixel, but my method returns the wrong value.
In the discussion for How to get pixel data from a UIImage (Cocoa
Touch) or CGImage (Core Graphics)?, there's a correction to the
routine you're using: use calloc instead of malloc, or you'll start
getting arbitrary results.
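That's a one-line change to the allocation in verifyAlphaPixelImage:atX:atY: (a sketch; everything else stays the same):

// calloc zero-fills the buffer, so pixels the image never draws into
// read back as 0 instead of whatever garbage malloc left behind.
unsigned char *rawData = calloc(height * width * 4, sizeof(unsigned char));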
If your Piece class ever scales images, you will need to scale its x and y inputs by the scale the image is being displayed at.
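For instance (a sketch that assumes the Piece stretches its image to fill its bounds; other content modes need a different mapping; piece here stands for the (Piece *)v from your loop):

// Map the touch point from view coordinates into bitmap coordinates.
CGFloat xScale = piece.image.size.width / piece.bounds.size.width;
CGFloat yScale = piece.image.size.height / piece.bounds.size.height;
BOOL result = [self verifyAlphaPixelImage:piece
                                      atX:point.x * xScale
                                      atY:point.y * yScale];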
The touchesBegan method oddly looks at some other view it contains, not itself. Is there a reason for this? What class is touchesBegan in?
Subviews are stored in drawing order from back to front, so when you iterate through subviews, you'll look at (and potentially select) the rearmost object first. Iterate from the last element of subviews towards the front instead.
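For example, only the loop header needs to change (a sketch):

// subviews is ordered back-to-front, so enumerate in reverse to test
// the frontmost (visually topmost) Piece first.
for (UIView *v in [[self.view subviews] reverseObjectEnumerator]) {
    // same body as before: frame test, alpha test, break on the first hit
}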
Even after this works, it will be very inefficient, rendering every Piece image every time the user taps. You'll ultimately want to cache the pixel data for every Piece for rapid lookup.
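One possible shape for that cache (a sketch; pixelCache and renderRGBABytesForImage: are hypothetical names for an NSMutableDictionary property and a helper wrapping the bitmap-context code above):

- (NSData *)cachedPixelsForPiece:(Piece *)piece
{
    NSData *pixels = self.pixelCache[@(piece.tag)]; // hypothetical cache, keyed by tag
    if (!pixels) {
        // Render once and keep the RGBA bytes; later taps just index into memory.
        pixels = [self renderRGBABytesForImage:piece.image];
        self.pixelCache[@(piece.tag)] = pixels;
    }
    return pixels;
}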
Related
I'm using this to generate curved text:
- (UIImage *)createCircularText:(NSString *)text withSize:(CGSize)size andCenter:(CGPoint)center
{
    UIFont *font = [UIFont fontWithName:@"HelveticaNeue-Light" size:15];
    // Start drawing
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Retrieve the center and set a radius
    CGFloat r = center.x / 3;
    // Start by adjusting the context origin
    // This affects all subsequent operations
    CGContextTranslateCTM(context, center.x, center.y);
    // Calculate the full extent
    CGFloat fullSize = 0;
    for (int i = 0; i < [text length]; i++)
    {
        NSString *letter = [text substringWithRange:NSMakeRange(i, 1)];
        CGSize letterSize = [letter sizeWithAttributes:@{NSFontAttributeName:font}];
        fullSize += letterSize.width;
    }
    // Initialize the consumed space
    CGFloat consumedSize = 0.0f;
    // Iterate through the alphabet
    for (int i = 0; i < [text length]; i++)
    {
        // Retrieve the letter and measure its display size
        NSString *letter = [text substringWithRange:NSMakeRange(i, 1)];
        CGSize letterSize = [letter sizeWithAttributes:@{NSFontAttributeName:font}];
        // Calculate the current angular offset
        //CGFloat theta = i * (2 * M_PI / ((float)[text length] * 3));
        // Move the pointer forward, calculating the new percentage of travel along the path
        consumedSize += letterSize.width / 2.0f;
        CGFloat percent = consumedSize / fullSize;
        CGFloat theta = (percent * 2 * M_PI) / ((float)[text length] / 4);
        consumedSize += letterSize.width / 2.0f;
        // Encapsulate each stage of the drawing
        CGContextSaveGState(context);
        // Rotate the context
        CGContextRotateCTM(context, theta);
        // Translate up to the edge of the radius and move left by
        // half the letter width. The height translation is negative
        // as this drawing sequence uses the UIKit coordinate system.
        // Transformations that move up go to lower y values.
        CGContextTranslateCTM(context, -letterSize.width / 2, -r);
        // Draw the letter and pop the transform state
        [letter drawAtPoint:CGPointMake(0, 0) withAttributes:@{NSFontAttributeName:font}];
        CGContextRestoreGState(context);
    }
    // Retrieve and return the image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
and I get this back:
The problem is that the text starts at 0°, but I actually want it to begin further to the left, so that the center of the string sits at 0°. How can I accomplish this?
Two options should work:
After drawing all of the text, rotate the context by half of the used angle and get the image from the context at that point.
Make two passes. The first simply calculates the required angle to draw the text. The second pass does what you do now, except that in the 2nd pass you subtract half of the required total angle from each letter's angle.
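Since subtracting a constant from every letter's angle is equivalent to pre-rotating the whole context, the second option can be reduced to a couple of lines before your drawing loop. A minimal sketch, assuming the total swept angle follows the same width-to-theta mapping your loop already uses:

// theta at percent == 1.0, i.e. the angle consumed by the whole string.
CGFloat totalAngle = (2 * M_PI) / ((float)[text length] / 4);
// Pre-rotate backwards by half of it so the string centers on 0 degrees.
CGContextRotateCTM(context, -totalAngle / 2);
// ...then run the existing per-letter drawing loop unchanged.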
I am trying to pick the color from a horizontal bar of image views by reading their pixel values, which is a sort of horizontal colour picker,
but I am unable to retrieve the proper colour; there is some problem in sending the points of the knob because of the hierarchy:
- View
- UIImageView (Red)
- UIImageView (Green)
- UIImageView (Blue)
- etc…
Below is the code for it.
storyboard:
#import "EffViewController.h"
#import "TouchMoveGestureRecognizer.h"
#interface EffViewController ()
#end
#implementation EffViewController
#synthesize colorPicker,sliders,sliderValues,picker;
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
for (ColorPickerImageView *colorPickers in _colorPickerArray) {
colorPickers.pickedColorDelegate = self;
}
//colorPicker.pickedColorDelegate = self;
TouchMoveGestureRecognizer* gr = [[TouchMoveGestureRecognizer alloc] init];
[picker addGestureRecognizer: gr];
[[NSNotificationCenter defaultCenter] addObserver:self selector:#selector(getTheColorPickerData:) name:#"getTheColorPickerData" object:nil];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void) pickedColor:(UIColor*)color atPoint:(CGPoint)point{
[picker setHidden:false];
self.view.backgroundColor = color;
// Convert point Positions from colorpickerview to main view
CGPoint pointInViewCoords = [self.view convertPoint:point fromView:colorPicker];
//[picker setCenter:pointInViewCoords];
CGColorRef cgcolor = [color CGColor];
int numComponents = CGColorGetNumberOfComponents(cgcolor);
if (numComponents == 4){
const CGFloat *components = CGColorGetComponents(cgcolor);
for(UISlider *slider in sliders)
[slider setValue: components[[slider tag]-1]*255.0f];
for(UITextField *text in sliderValues){
int intValue = components[[text tag]-1]*255;
[text setText:[NSString stringWithFormat:#"%d",intValue]];
}
}
}
- (void)getTheColorPickerData:(NSNotification*)notification
{
// 1
//UIEvent *event = notification.userInfo[#"events"];
NSSet *touches = notification.userInfo[#"touches"];
UITouch* t = [touches anyObject];
CGPoint Worldpoint = [t locationInView: self.colorPickerViewFrame];
//point.y = picker.frame.origin.y;
NSLog(#"World Points x:%f y:%f",Worldpoint.x,Worldpoint.y);
[picker setHidden:false];
CGPoint point = picker.frame.origin;//[[[event allTouches] anyObject] locationInView:self.view];
point.y = picker.frame.size.height/2;
UIControl *control = picker;
NSLog(#"Points x:%f y:%f",point.x,point.y);
NSLog(#"Color Picker Points x :%f y:%f w:%f h:%f",[colorPicker frame].origin.x,[colorPicker frame].origin.y,[colorPicker frame].size.width,[colorPicker frame].size.height);
for (ColorPickerImageView *colorPickers in _colorPickerArray) {
if (CGRectContainsPoint([colorPickers frame], point)) {
// control.center = point;
//[colorPicker touchesEnded:[event allTouches] withEvent:event];
[colorPicker imageSelectionEnded:point];
}
}
}
TouchMoveGesture
#import "TouchMoveGestureRecognizer.h"
#implementation TouchMoveGestureRecognizer
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch* t = [touches anyObject];
_ptOffset = [t locationInView: self.view];
// if(![self isInsideTheBoundary:_ptOffset]){
// return;
// }
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch* t = [touches anyObject];
CGPoint pt = [t locationInView: self.view.superview];
float ptYtemp;
// if(![self isInsideTheBoundary:pt]){
// return;
// }
if ((pt.x < 295.0f && pt.x > 25.0f) == NO) {
return;
}
pt.x -= _ptOffset.x;
ptYtemp = pt.y;
pt.y -= _ptOffset.y;
if ((pt.y < 26.0f && pt.y > 24.0f) == NO) {
pt.y=ptYtemp;
}
CGRect r = self.view.frame;
r.origin.x = pt.x;
self.view.frame = r;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
_ptOffset = CGPointMake(0, 0);
[[NSNotificationCenter defaultCenter] postNotificationName:#"getTheColorPickerData"
object:self
userInfo:#{#"touches":touches,
#"events":event}];
}
-(BOOL) isInsideTheBoundary :(CGPoint) location {
BOOL isInsideBoundary = NO;
// NSLog(#"Location x:%f y:%f",location.x,location.y);
if (location.x < 320.0f && location.x > 0.0f) {
if (location.y < 31.0f && location.y > 26.0f) {
return YES;
}
}
return NO;
}
#end
ColorPickerImageView
#import "ColorPickerImageView.h"
#import "EffViewController.h"
#import <CoreGraphics/CoreGraphics.h>
#import <QuartzCore/CoreAnimation.h>
#implementation ColorPickerImageView
#synthesize lastColor;
#synthesize pickedColorDelegate;
- (void) touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event {
if (self.hidden==YES) {
//color wheel is hidden, so don't handle this as a color wheel event.
[[self nextResponder] touchesEnded:touches withEvent:event];
return;
}
UITouch* touch = [touches anyObject];
CGPoint point = [touch locationInView:self]; //where image was tapped
}
-(void)imageSelectionEnded:(CGPoint)point {
if (self.hidden==YES) {
//color wheel is hidden, so don't handle this as a color wheel event.
// [[self nextResponder] touchesEnded:touches withEvent:event];
return;
}
self.lastColor = [self getPixelColorAtLocation:point];
NSLog(#"color %#",lastColor);
[pickedColorDelegate pickedColor:(UIColor*)self.lastColor atPoint:point];
}
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpa, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
/** Extra Added code for Resized Images ****/
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
/** ****************************************/
/** Extra Code Added for Resolution ***********/
CGFloat x = 1.0;
if ([self.image respondsToSelector:#selector(scale)]) x = self.image.scale;
/*********************************************/
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
// int offset = 4*((w*round(point.y))+round(point.x));
int offset = 4*((w*round(point.y))+round(point.x))*x; //Replacement for Resolution
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(#"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef) inImage {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL)
{
fprintf(stderr, "Error allocating color space\n");
return NULL;
}
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
#end
I want to draw the shape of a small plot of land on a view by taking the latitude and longitude at the corners of the land.
I have written the following code. For now I used hard-coded values.
- (void)drawRect:(CGRect)rect {
    CGSize screenSize = [UIScreen mainScreen].applicationFrame.size;
    SCALE = MIN(screenSize.width, screenSize.height) / (2.0 * EARTH_RADIUS);
    OFFSET = MIN(screenSize.width, screenSize.height) / 2.0;
    CGPoint latLong1 = {18.626103, 73.805023};
    CGPoint latLong2 = {18.626444, 73.804884};
    CGPoint latLong3 = {18.626226, 73.804969};
    CGPoint latLong4 = {18.626103, 73.805023};
    NSMutableArray *points = [NSMutableArray arrayWithObjects:
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong1]],
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong2]],
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong3]],
        [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong4]], nil];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (int i = 0; i < points.count; i++)
    {
        // CGPoint newCoord = [self convertLatLongCoord:latLong];
        NSValue *val = [points objectAtIndex:i];
        CGPoint newCoord = [val CGPointValue];
        if (i == 0)
        {
            // move to the first point
            CGContextMoveToPoint(ctx, newCoord.x, newCoord.y);
        }
        else
        {
            CGContextAddLineToPoint(ctx, newCoord.x, newCoord.y);
            CGContextSetLineWidth(ctx, 1);
            CGContextSetStrokeColorWithColor(ctx, [[UIColor redColor] CGColor]);
        }
    }
    CGContextStrokePath(ctx);
}
Below is the method that converts lat/long into x,y coordinates.
- (CGPoint)convertLatLongCoord:(CGPoint)latLong
{
    CGFloat x = EARTH_RADIUS * cos(latLong.x) * cos(latLong.y) * SCALE + OFFSET;
    CGFloat y = EARTH_RADIUS * cos(latLong.x) * sin(latLong.y) * SCALE + OFFSET;
    return CGPointMake(x, y);
}
My problem is that when I take the lat/long values of a small plot of land (e.g. a house plot), its shape is not visible on the view after drawing. How can I show the shape of the land maximized on the view?
Thanks in advance.
I am struggling with an EXC_BAD_ACCESS error in my iOS app. Everything works fine in the simulator and on an iPhone 4S device, BUT I get this error on an iPad 2 device. I have iOS 6.0 installed on both the iPhone and the iPad 2.
Here is the functionality: I have a Christmas tree (UIImageView) on a background image (UIImageView). I have lots of ornaments (UIImageViews) on top of the Christmas tree, and they all fall down when the user shakes the device. We can then drag the ornaments from the floor and decorate the tree with them. I noticed that it crashes ONLY when I release the ornament while it is only partially touching the tree (I mean when the ornament is touching the edges of the tree, is still over the background UIImageView, and I release/drop it).
The error occurs at CGFloat red = (rawData[byteIndex] * 1.0) / 255.0; inside the -(NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count method. Below is the code...
I tried various methods to resolve it, but to no avail. I would greatly appreciate it if someone could point me in the right direction.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    //[Audio1 pause];
    NSLog(@"touchesEnded:withEvent:");
    UITouch *touch = [touches anyObject];
    CGPoint currentPosition = [touch locationInView:self.view];
    if (pageNum == 9) //ROCKEFELLER CHRISTMAS TREE
    {
        NSLog(@"touchesEnded:withEvent: if (pageNum == 9)");
        float trans_y, drop_second;
        UIImageView *temp_view;
        // UITouch *touch = [touches anyObject];
        CGPoint _location = currentPosition;
        //for (int i = 601; i < 636; i++) {
        for (int i = 921; i < 956; i++) {
            if ([touch view] == [self.view viewWithTag:i]) {
                press_ornament = YES;
                // invalidate the transform scale.
                temp_view = (UIImageView *)[self.view viewWithTag:i];
                temp_view.transform = CGAffineTransformIdentity;
                // detect whether the position is inside or outside the tree
                color_array = [self getRGBAsFromImage:tree.image atX:_location.x andY:_location.y count:1];
                color = color_array[0];
                // Call getRed:green:blue:alpha: and pass in pointers to floats to take the answer.
                [color getRed:&red_color green:&green_color blue:&blue_color alpha:&alpha_value];
                NSLog(@"red = %f. Green = %f. Blue = %f. Alpha = %f", red_color, green_color, blue_color, alpha_value);
                // Using alpha, decide whether the point lies inside the tree view domain
                if ((alpha_value == 1) && (tree.frame.origin.x < _location.x) && (_location.x < tree.frame.origin.x + tree.frame.size.width) && (tree.frame.origin.y < _location.y) && (_location.y < tree.frame.origin.y + tree.frame.size.height)) {
                    [touch view].center = CGPointMake(_location.x, _location.y);
                }
                // outside the tree, let the ornament fall
                else {
                    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
                        trans_y = 280 - [touch view].frame.origin.y;
                        drop_second = trans_y / 150;
                        temp_view = (UIImageView *)[self.view viewWithTag:i];
                        // Move the image
                        [self moveImage:temp_view duration:drop_second curve:UIViewAnimationCurveLinear x:temp_view.frame.origin.x y:280];
                    }
                    else {
                        trans_y = 730 - [touch view].frame.origin.y;
                        drop_second = trans_y / 300;
                        temp_view = (UIImageView *)[self.view viewWithTag:i];
                        // Move the image
                        [self moveImage:temp_view duration:drop_second curve:UIViewAnimationCurveLinear x:temp_view.frame.origin.x y:730];
                    }
                }
            }
        }
    }
}
// This is the function that gets the color at a point.
- (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    // NSUInteger width = CGImageGetWidth(imageRef);
    // NSUInteger height = CGImageGetHeight(imageRef);
    NSInteger width = tree.frame.size.width;
    NSInteger height = tree.frame.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    free(rawData);
    return result;
}
I would suspect you do not get the xx,yy that you expect and therefore index outside your allocated block. Try to NSLog or debug xx,yy and see what byteIndex you end up with,
or put in an assert to catch when you go outside the range:
NSAssert( bytesPerRow * yy + xx * bytesPerPixel
          < height * width * 4,
          @"outside range!" );
I would also suggest that you make your function shorter and move the temporary variables into the blocks where they are used. It makes the code a bit easier to read.
What I want to do is move my finger across the screen (touchesMoved) and draw evenly spaced images (perhaps CGImageRefs) along the points generated by touchesMoved. I can draw lines, but what I want to generate is something that looks like this (for this example I am using an image of an arrow, but it could be any image; it could be a picture of my dog :) ). The main thing is to get the images evenly spaced when drawing with a finger on an iPhone or iPad.
First of all, HUGE props go out to Kendall. So, based on his answer, here is the code to take a UIImage, draw it on screen along a path (not a real pathRef, just a logical path created by the points) based on the distance between the touches, and then rotate the image correctly based on the VECTOR of the current and previous points. I hope you like it:
First you need to load an image to be used as a CGImage over and over again:
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"arrow.png" ofType:nil];
UIImage *img = [UIImage imageWithContentsOfFile:imagePath];
image = CGImageRetain(img.CGImage);
make sure in your dealloc that you call
CGImageRelease(image);
then in touchesBegan, just store the starting point in a var that is scoped outside the method (declare it in your header like this; in this case I am drawing into a UIView):
@interface myView : UIView {
    CGPoint lastPoint;
}
@end
then in touchesBegan:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    lastPoint = [touch locationInView:self];
}
and finally in touchesMoved, draw the bitmap to the screen, and then, when the distance moved is far enough (in my case 73, since my image is 73 pixels x 73 pixels), draw that image to the screen, save the new image, and set lastPoint equal to currentPoint:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    currentPoint = [touch locationInView:self];
    double deltaX = lastPoint.x - currentPoint.x;
    double deltaY = lastPoint.y - currentPoint.y;
    double powX = pow(deltaX, 2);
    double powY = pow(deltaY, 2);
    double distance = sqrt(powX + powY);
    if (distance >= 73) {
        lastPoint = currentPoint;
        UIGraphicsBeginImageContext(self.frame.size);
        [drawImage.image drawInRect:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];
        CGContextSaveGState(UIGraphicsGetCurrentContext());
        float angle = atan2(deltaX, deltaY);
        // atan2 returns radians; convert to degrees, since
        // CGImageRotatedByAngle: converts its argument from degrees to radians.
        angle *= (180 / M_PI);
        CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(currentPoint.x, currentPoint.y, 73, 73), [self CGImageRotatedByAngle:image angle:angle * -1]);
        CGContextRestoreGState(UIGraphicsGetCurrentContext());
        drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        distance = 0;
    }
}
- (CGImageRef)CGImageRotatedByAngle:(CGImageRef)imgRef angle:(CGFloat)angle
{
    CGFloat angleInRadians = angle * (M_PI / 180);
    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);
    CGRect imgRect = CGRectMake(0, 0, width, height);
    CGAffineTransform transform = CGAffineTransformMakeRotation(angleInRadians);
    CGRect rotatedRect = CGRectApplyAffineTransform(imgRect, transform);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bmContext = CGBitmapContextCreate(NULL,
                                                   rotatedRect.size.width,
                                                   rotatedRect.size.height,
                                                   8,
                                                   0,
                                                   colorSpace,
                                                   kCGImageAlphaPremultipliedFirst);
    CGContextSetAllowsAntialiasing(bmContext, FALSE);
    CGContextSetInterpolationQuality(bmContext, kCGInterpolationNone);
    CGColorSpaceRelease(colorSpace);
    CGContextTranslateCTM(bmContext,
                          +(rotatedRect.size.width / 2),
                          +(rotatedRect.size.height / 2));
    CGContextRotateCTM(bmContext, angleInRadians);
    CGContextTranslateCTM(bmContext,
                          -(rotatedRect.size.width / 2),
                          -(rotatedRect.size.height / 2));
    CGContextDrawImage(bmContext, CGRectMake(0, 0,
                                             rotatedRect.size.width,
                                             rotatedRect.size.height),
                       imgRef);
    CGImageRef rotatedImage = CGBitmapContextCreateImage(bmContext);
    CFRelease(bmContext);
    [(id)rotatedImage autorelease];
    return rotatedImage;
}
this will create an image that looks like this:
I'm going to add the following (with some changes to the above code) in order to try and fill in the voids where touchesMoved misses some points when you move fast:
CGPoint point1 = CGPointMake(100, 200);
CGPoint point2 = CGPointMake(300, 100);
double deltaX = point2.x - point1.x;
double deltaY = point2.y - point1.y;
double powX = pow(deltaX, 2);
double powY = pow(deltaY, 2);
double distance = sqrt(powX + powY);
for (int j = 1; j * 73 < distance; j++)
{
    // Walk along the segment in 73-pixel steps, stamping a point at each.
    double x = (point1.x + ((deltaX / distance) * 73 * j));
    double y = (point1.y + ((deltaY / distance) * 73 * j));
    NSLog(@"My new point is x: %f y: %f", x, y);
}
Assuming that you already have code that tracks the user's touch as they move their touch around the screen, it sounds like you want to detect when they have moved a distance equal to the length of your image, at which time you want to draw another copy of your image under their touch.
To achieve this, I think you will need to:
- calculate the length (width) of your image
- implement code to draw copies of your image onto your view, rotated to any angle
- each time the user's touch moves (e.g. in touchesMoved:):
  - calculate the delta of the touch each time it moves and generate the "length" of that delta (e.g. something like sqrt(dx^2 + dy^2))
  - accumulate the distance since the last image was drawn
  - if the distance has reached the length of your image, draw a copy of your image under the touch's current position, rotated appropriately (probably according to the vector from the position of the last image to the current position); see the sketch after this list
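Here is a condensed sketch of that accumulate-and-stamp loop; lastStamp, accumulated, imageLength, and the stampImage:atPoint:angle: helper are hypothetical names you would supply:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self];
    CGFloat dx = p.x - lastStamp.x;
    CGFloat dy = p.y - lastStamp.y;
    // Accumulate the distance travelled since the last stamped image.
    accumulated += sqrt(dx * dx + dy * dy);
    if (accumulated >= imageLength) {
        // Stamp a copy under the touch, rotated along the travel vector.
        [self stampImage:image atPoint:p angle:atan2(dy, dx)];
        lastStamp = p;
        accumulated = 0;
    }
}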
How does that sound?