What I want to do is move my finger across the screen (touchesMoved) and draw evenly spaced images (perhaps CGImageRefs) along the points generated by touchesMoved. I can draw lines, but what I want to generate is something that looks like this (for this example I am using an image of an arrow, but it could be any image; it could be a picture of my dog :) ). The main thing is to get the images evenly spaced when drawing with a finger on an iPhone or iPad.
First of all, HUGE props go out to Kendall. Based on his answer, here is the code to take a UIImage, draw it on screen along a path (not a real CGPathRef, just a logical path created by the points) based on the distance between the touches, and then rotate the image correctly based on the vector between the current and previous points. I hope you like it.
First you need to load an image to be used as a CGImage over and over again:
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"arrow.png" ofType:nil];
UIImage *img = [UIImage imageWithContentsOfFile:imagePath];
image = CGImageRetain(img.CGImage);   // image is a CGImageRef ivar
Make sure that in your dealloc you call:
CGImageRelease(image);
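For example (assuming manual reference counting, as in the rest of this code):

- (void)dealloc {
    CGImageRelease(image);   // balance the CGImageRetain above
    [super dealloc];
}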
Then in touchesBegan, just store the starting point in a variable that is scoped outside the method (declare it in your header like this; in this case I am drawing into a UIView subclass):
@interface myView : UIView {
    CGPoint lastPoint;
}
@end
Then in touchesBegan:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    lastPoint = [touch locationInView:self];
}
And finally, in touchesMoved, draw the existing bitmap to the context; then, when the touch has moved far enough (in my case 73 points, since my image is 73 x 73 pixels), draw the image at that spot, save the new composite image, and set lastPoint equal to currentPoint:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];

    double deltaX = lastPoint.x - currentPoint.x;
    double deltaY = lastPoint.y - currentPoint.y;
    double distance = sqrt(pow(deltaX, 2) + pow(deltaY, 2));

    if (distance >= 73) {
        lastPoint = currentPoint;

        UIGraphicsBeginImageContext(self.frame.size);
        // drawImage is a UIImageView ivar that holds the composited drawing
        [drawImage.image drawInRect:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];

        CGContextSaveGState(UIGraphicsGetCurrentContext());
        // atan2 already returns radians; convert to degrees here because
        // CGImageRotatedByAngle:angle: below expects degrees
        float angle = atan2(deltaX, deltaY) * (180.0 / M_PI);
        CGContextDrawImage(UIGraphicsGetCurrentContext(),
                           CGRectMake(currentPoint.x, currentPoint.y, 73, 73),
                           [self CGImageRotatedByAngle:image angle:angle * -1]);
        CGContextRestoreGState(UIGraphicsGetCurrentContext());

        drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
}
- (CGImageRef)CGImageRotatedByAngle:(CGImageRef)imgRef angle:(CGFloat)angle
{
    CGFloat angleInRadians = angle * (M_PI / 180);
    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);

    CGRect imgRect = CGRectMake(0, 0, width, height);
    CGAffineTransform transform = CGAffineTransformMakeRotation(angleInRadians);
    CGRect rotatedRect = CGRectApplyAffineTransform(imgRect, transform);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bmContext = CGBitmapContextCreate(NULL,
                                                   (size_t)ceil(rotatedRect.size.width),
                                                   (size_t)ceil(rotatedRect.size.height),
                                                   8,
                                                   0,
                                                   colorSpace,
                                                   kCGImageAlphaPremultipliedFirst);
    CGContextSetAllowsAntialiasing(bmContext, FALSE);
    CGContextSetInterpolationQuality(bmContext, kCGInterpolationNone);
    CGColorSpaceRelease(colorSpace);

    // Rotate around the center of the new bitmap
    CGContextTranslateCTM(bmContext, +(rotatedRect.size.width / 2), +(rotatedRect.size.height / 2));
    CGContextRotateCTM(bmContext, angleInRadians);
    CGContextTranslateCTM(bmContext, -(rotatedRect.size.width / 2), -(rotatedRect.size.height / 2));

    CGContextDrawImage(bmContext,
                       CGRectMake(0, 0, rotatedRect.size.width, rotatedRect.size.height),
                       imgRef);

    CGImageRef rotatedImage = CGBitmapContextCreateImage(bmContext);
    CFRelease(bmContext);

    // Manual reference counting only: hand ownership to the autorelease pool
    [(id)rotatedImage autorelease];
    return rotatedImage;
}
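A note on memory management: the [(id)rotatedImage autorelease] trick only compiles under manual reference counting. If the project uses ARC, one option (available since iOS 7) is to return the image through CFAutorelease instead, for example:

    // ARC-compatible variant of the last two lines (iOS 7 and later)
    return (CGImageRef)CFAutorelease(rotatedImage);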
This will create an image that looks like this:
I am going to add the following (with some changes to the code above) in order to try to fill in the voids where touchesMoved misses some points when you move fast:
CGPoint point1 = CGPointMake(100, 200);
CGPoint point2 = CGPointMake(300, 100);

double deltaX = point2.x - point1.x;
double deltaY = point2.y - point1.y;
double distance = sqrt(pow(deltaX, 2) + pow(deltaY, 2));

// Walk along the segment in 73-point steps and emit the intermediate points
for (int j = 1; j * 73 < distance; j++)
{
    double x = point1.x + ((deltaX / distance) * 73 * j);
    double y = point1.y + ((deltaY / distance) * 73 * j);
    NSLog(@"My new point is x: %f y: %f", x, y);
}
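Wiring that interpolation into touchesMoved might look roughly like this. This is a sketch only; it reuses the image ivar, the 73-point spacing and CGImageRotatedByAngle:angle: from above, and measures the vector from lastPoint to currentPoint (the opposite direction from the earlier code, hence the negation in the angle):

// Sketch: stamp the rotated image at every interpolated point between lastPoint and currentPoint
double deltaX = currentPoint.x - lastPoint.x;
double deltaY = currentPoint.y - lastPoint.y;
double distance = sqrt(deltaX * deltaX + deltaY * deltaY);
// negate so the angle uses the same (lastPoint - currentPoint) convention as above
float angle = atan2(-deltaX, -deltaY) * (180.0 / M_PI);

for (int j = 1; j * 73 < distance; j++) {
    CGPoint p = CGPointMake(lastPoint.x + (deltaX / distance) * 73 * j,
                            lastPoint.y + (deltaY / distance) * 73 * j);
    CGContextDrawImage(UIGraphicsGetCurrentContext(),
                       CGRectMake(p.x, p.y, 73, 73),
                       [self CGImageRotatedByAngle:image angle:angle * -1]);
}
// afterwards, set lastPoint to the last stamped point so the spacing stays even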
Assuming that you already have code that tracks the user's touch as they move their touch around the screen, it sounds like you want to detect when they have moved a distance equal to the length of your image, at which time you want to draw another copy of your image under their touch.
To achieve this, I think you will need to:
calculate the length (width) of your image
implement code to draw copies of your image onto your view, rotated to any angle
each time the user's touch moves (e.g. in touchesMoved:):
calculate the delta of the touch each time it moves and generate the "length" of that delta (e.g. something like sqrt(dx^2 + dy^2))
accumulate the distance since the last image was drawn
if the distance has reached the length of your image, draw a copy of your image under the touch's current position, rotated appropriately (probably according to the vector from the position of the last image to the current position)
How does that sound?
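To make the distance-accumulation idea concrete, here is a minimal sketch inside a UIView subclass. The ivar names (accumulatedDistance, lastPoint, lastImagePoint, imageLength) and the drawImageAtPoint:angle: method are assumptions, not part of the original code:

// Sketch of the accumulation logic only
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self];
    CGFloat dx = p.x - lastPoint.x;
    CGFloat dy = p.y - lastPoint.y;
    accumulatedDistance += sqrt(dx * dx + dy * dy);

    if (accumulatedDistance >= imageLength) {
        // rotate according to the vector from the last image to the current position
        CGFloat angle = atan2(p.y - lastImagePoint.y, p.x - lastImagePoint.x);
        [self drawImageAtPoint:p angle:angle];   // hypothetical drawing method
        lastImagePoint = p;
        accumulatedDistance = 0;
    }
    lastPoint = p;
}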
Assume I want to draw a line that resembles a clock dial (the blue line), starting from the center of the screen (center) and ending at the user's touch position (A, B or C).
It does not matter how far away the finger is; the dial always has the same length.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchLocation = [touch locationInView:touch.view];
    NSLog(@"Center point = %f %f", self.view.center.x, self.view.center.y);
    NSLog(@"finger at point = %f %f", touchLocation.x, touchLocation.y);
    // line redrawing itself ...
    NSLog(@"end point = %f %f", ?, ?);
}
You need to know the length of your line. It's unrelated to the touch point, correct?
First find the coordinates of the touch relative to the center point
x = Touch.x - center.x
y = Touch.y - center.y
Now we need to get the angle
angle = arctan(y / x)
If x is negative, adjust by 180 degrees (pi); this restores what is lost in the division.
Now multiply cos(angle) and sin(angle) by your desired length to get the new point (add the center point back in to get the final end point):
newX = cos(angle) * length
newY = sin(angle) * length
Here is some Swift code that gets you mostly there. Try it in a playground to verify different touch values and lengths.
let lineLength = 13.0

// Touch points (already relative to the center)
let x = -5.0
let y = -5.0

// Calculate angle
var angle = atan(y / x)
if x < 0 {
    angle += 3.14159
}

// Get new X and Y
var newX = cos(angle) * lineLength
var newY = sin(angle) * lineLength
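For completeness, a rough Objective-C sketch of the same idea dropped into the questioner's touchesMoved, using atan2 (which handles the quadrant adjustment automatically). The dialLength constant is an assumption, and both points are taken in self.view's coordinate space:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGFloat dialLength = 100.0;   // assumed fixed dial length
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchLocation = [touch locationInView:self.view];
    CGPoint center = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));

    // atan2 handles the sign/quadrant adjustment that plain atan needs done by hand
    CGFloat angle = atan2(touchLocation.y - center.y, touchLocation.x - center.x);
    CGPoint endPoint = CGPointMake(center.x + cos(angle) * dialLength,
                                   center.y + sin(angle) * dialLength);
    NSLog(@"end point = %f %f", endPoint.x, endPoint.y);
}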
I am drawing annotations on a view. The line annotation is causing a problem.
I have a parent class Shape (a subclass of UIView). All the annotations are subclasses of Shape, and each annotation class overrides the drawRect: method. The DrawingCanvas class (also a UIView subclass, added as a subview of my view controller's parent view) handles the pan gesture. Here is a piece of code:
- (void)panGestureRecognizer:(UIPanGestureRecognizer *)panGesture {
    static CGPoint initialPoint;
    CGPoint translation = [panGesture translationInView:panGesture.view];
    static Shape *shape;

    if (panGesture.state == UIGestureRecognizerStateBegan) {
        initialPoint = [panGesture locationInView:panGesture.view];
        if (selectedShape == nil) {
            if ([selectedOption isEqualToString:@"line"]) {
                shape = [[Line alloc] initWithFrame:CGRectMake(initialPoint.x, initialPoint.y, 10, 10) isArrowLine:NO];
                ((Line *)shape).pointStart = [panGesture.view convertPoint:initialPoint toView:shape];
            }
            [panGesture.view addSubview:shape];
        }
        [shape setNeedsDisplay];
    }
    else if (panGesture.state == UIGestureRecognizerStateChanged) {
        if ([shape isKindOfClass:[Line class]]) {
            CGRect newRect = shape.frame;
            if (translation.x < 0) {
                newRect.origin.x = initialPoint.x + translation.x - LINE_RECT_OFFSET;
                newRect.size.width = fabsf(translation.x) + LINE_RECT_OFFSET * 2;
            }
            else {
                newRect.size.width = translation.x + LINE_RECT_OFFSET * 2;
            }
            if (translation.y < 0) {
                newRect.origin.y = initialPoint.y + translation.y - LINE_RECT_OFFSET;
                newRect.size.height = fabsf(translation.y) + LINE_RECT_OFFSET * 2;
            }
            else {
                newRect.size.height = translation.y + LINE_RECT_OFFSET * 2;
            }
            shape.frame = newRect;

            CGPoint endPoint = CGPointMake(initialPoint.x + translation.x, initialPoint.y + translation.y);
            ((Line *)shape).pointStart = [panGesture.view convertPoint:initialPoint toView:shape];
            ((Line *)shape).pointEnd = [panGesture.view convertPoint:endPoint toView:shape];
            [shape setNeedsDisplay];
        }
    }
}
The Line class's drawRect: contains:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, rect);
    CGContextSetStrokeColorWithColor(context, self.color.CGColor);
    CGContextSetFillColorWithColor(context, self.color.CGColor);
    CGContextMoveToPoint(context, pointStart.x, pointStart.y);
    CGContextAddLineToPoint(context, pointEnd.x, pointEnd.y);
    CGContextSetLineWidth(context, 2.f);
    CGContextStrokePath(context);
}
For moving and resizing I am handling touch events.
For moving the annotation I am doing this in touchesMoved:
UITouch *touch = [[event allTouches] anyObject];
CGPoint newPoint = [touch locationInView:self];
CGPoint previousPoint = [touch previousLocationInView:self];
CGRect rect = self.frame;
rect.origin.x += newPoint.x - previousPoint.x;
rect.origin.y += newPoint.y - previousPoint.y;
self.frame = rect;
[self setNeedsDisplay];
It's all good up to here, but resizing the line by dragging its endpoints confuses me. I have placed two image views at the endpoints. In touchesBegan I detect the endpoint like this:
UITouch *touch = [[event allTouches] anyObject];
CGPoint touchPoint = [touch locationInView:self];
if (CGRectContainsPoint(imageView1.frame, touchPoint) || CGRectContainsPoint(imageView2.frame, touchPoint)) {
    isEndPoint = YES;
}
If isEndPoint is YES, then I have to resize the line. I need to know in which direction I am dragging, and how to update the points (I have two CGPoint ivars: pointStart and pointEnd) and the frame.
Kindly suggest how to do the resizing, i.e. how to update the points and the frame.
For determining the line's direction, you can do the following.

When dragging from the Start Point:
a) If the new 'X' is greater than the Start Point's 'X', the line is heading to the right.
b) If the new 'X' is less than the Start Point's 'X', the line is heading to the left.
c) If the new 'Y' is greater than the Start Point's 'Y', the line is heading downwards.
d) If the new 'Y' is less than the Start Point's 'Y', the line is heading upwards.

When dragging from the End Point:
a) If the new 'X' is greater than the End Point's 'X', the line is heading to the right.
b) If the new 'X' is less than the End Point's 'X', the line is heading to the left.
c) If the new 'Y' is greater than the End Point's 'Y', the line is heading downwards.
d) If the new 'Y' is less than the End Point's 'Y', the line is heading upwards.

If the line is being dragged from the End Point, keep the Start Point unchanged; if it is being dragged from the Start Point, keep the End Point unchanged.
After updating the line, you just need to update the Line's rect.
For that you need to add 2 more CGPoint properties to the Line class: 1. Initial Point, 2. Last Point.
Initially set the Initial Point equal to the Start Point and the Last Point equal to the End Point.
Afterwards, every time the line changes, update these 2 properties by comparing them with the Start and End Points. Here is a hint for the comparisons:
Compare all 4 points, i.e. Initial Point, Last Point, Start Point and End Point.
Updating the Initial Point and Last Point:
Set the Initial Point's 'X' and 'Y' to the minimum 'X' and minimum 'Y' among all 4 points. Set the Last Point's 'X' and 'Y' to the maximum 'X' and maximum 'Y' among these 4 points.
Make the Line's rect from these 2 points, Initial Point and Last Point.
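As a rough sketch of that update step, assuming pointStart and pointEnd are kept in the superview's coordinate space at this point, and reusing LINE_RECT_OFFSET from the question's code:

// Sketch only: rebuild the Line's frame from the min/max of its two end points
CGFloat minX = MIN(pointStart.x, pointEnd.x) - LINE_RECT_OFFSET;
CGFloat minY = MIN(pointStart.y, pointEnd.y) - LINE_RECT_OFFSET;
CGFloat maxX = MAX(pointStart.x, pointEnd.x) + LINE_RECT_OFFSET;
CGFloat maxY = MAX(pointStart.y, pointEnd.y) + LINE_RECT_OFFSET;
self.frame = CGRectMake(minX, minY, maxX - minX, maxY - minY);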
I hope this solves your problem.
I want to draw the shape of a small piece of land on a view by taking the latitude and longitude at the corners of the land.
I have written the following code. For now I am using hard-coded values.
- (void)drawRect:(CGRect)rect {
    CGSize screenSize = [UIScreen mainScreen].applicationFrame.size;
    SCALE = MIN(screenSize.width, screenSize.height) / (2.0 * EARTH_RADIUS);
    OFFSET = MIN(screenSize.width, screenSize.height) / 2.0;

    CGPoint latLong1 = {18.626103, 73.805023};
    CGPoint latLong2 = {18.626444, 73.804884};
    CGPoint latLong3 = {18.626226, 73.804969};
    CGPoint latLong4 = {18.626103, 73.805023};

    NSMutableArray *points = [NSMutableArray arrayWithObjects:
                              [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong1]],
                              [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong2]],
                              [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong3]],
                              [NSValue valueWithCGPoint:[self convertLatLongCoord:latLong4]], nil];

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (int i = 0; i < points.count; i++)
    {
        NSValue *val = [points objectAtIndex:i];
        CGPoint newCoord = [val CGPointValue];
        if (i == 0)
        {
            // move to the first point
            CGContextMoveToPoint(ctx, newCoord.x, newCoord.y);
        }
        else
        {
            CGContextAddLineToPoint(ctx, newCoord.x, newCoord.y);
            CGContextSetLineWidth(ctx, 1);
            CGContextSetStrokeColorWithColor(ctx, [[UIColor redColor] CGColor]);
        }
    }
    CGContextStrokePath(ctx);
}
Below is the method which converts lat/long into x,y coordinates:
- (CGPoint)convertLatLongCoord:(CGPoint)latLong
{
    CGFloat x = EARTH_RADIUS * cos(latLong.x) * cos(latLong.y) * SCALE + OFFSET;
    CGFloat y = EARTH_RADIUS * cos(latLong.x) * sin(latLong.y) * SCALE + OFFSET;
    return CGPointMake(x, y);
}
My problem is that when I take the lat/long of a small piece of land (e.g. a house plot), its shape is not visible on the view after drawing. How can I scale the shape of the land up so that it is visible in the view?
Thanks in advance.
I'm trying to create an application that allows the user to move a frame over an image, so that I can apply some effects to a selected region.
I need to allow the user to precisely drag and scale the masked frame on the image. I need this to be exact, just like any other photo app does it.
My strategy is to get the user's touch points in a touch-moved event and scale my frame accordingly. That was pretty intuitive. I wrote the following code for handling the touch-moved event:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint touchPoint = [touch locationInView:[self view]];
    float currX = touchPoint.x;
    float currY = touchPoint.y;
    /* proceed with other operations on currX and currY,
       which is coming out quite well */
}
The only problem is that the coordinates in currX and currY are not quite where they are supposed to be. There is a parallax-like error, which varies from device to device. I also think the x and y coordinates get swapped in the case of an iPad.
Could you please help me figure out how to get the exact touch coordinates?
My background image is in one view (imageBG) and the masked frame is in a separate one (maskBG). I have tried:
CGPoint touchPoint = [touch locationInView:[maskBG view]];
and
CGPoint touchPoint = [touch locationInView:[imageBG view]];
...but the same problem persists. I have also noticed that the touch error is worse on an iPad than on an iPhone or iPod.
image.center = [[[event allTouches] anyObject] locationInView:self.view];
Hi, your issue is that the image and the iPhone screen are not necessarily in the same aspect ratio. Your touch point might not translate correctly to your actual image.
- (UIImage *)getCroppedImage {
    CGRect rect = self.movingView.frame;

    // Translate the moving view's origin into the image view's coordinate space
    CGPoint a;
    a.x = rect.origin.x - self.imageView.frame.origin.x;
    a.y = rect.origin.y - self.imageView.frame.origin.y;

    // Scale from view coordinates to image-pixel coordinates
    a.x = a.x * (self.imageView.image.size.width / self.imageView.frame.size.width);
    a.y = a.y * (self.imageView.image.size.height / self.imageView.frame.size.height);
    rect.origin = a;
    rect.size.width = rect.size.width * (self.imageView.image.size.width / self.imageView.frame.size.width);
    rect.size.height = rect.size.height * (self.imageView.image.size.height / self.imageView.frame.size.height);

    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // translated rectangle for drawing the sub-image
    CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, self.imageView.image.size.width, self.imageView.image.size.height);

    // clip to the bounds of the image context
    // (not strictly necessary as it will get clipped anyway)
    CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));

    // draw image
    [self.imageView.image drawInRect:drawRect];

    // grab image
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return croppedImage;
}
This is what I did to crop: movingView's frame is the rect I pass for cropping; see how it is translated so that it maps correctly onto the image. Make sure the image view on which the user sees the image uses the aspect-fit content mode.
Note: I make the image view's rect fit the aspect-fit image.
Use this to do it:
- (CGSize)makeSize:(CGSize)originalSize fitInSize:(CGSize)boxSize
{
    float widthScale = boxSize.width / originalSize.width;
    float heightScale = boxSize.height / originalSize.height;
    float scale = MIN(widthScale, heightScale);
    CGSize newSize = CGSizeMake(originalSize.width * scale, originalSize.height * scale);
    return newSize;
}
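For example, a minimal sketch of how this helper might be used to size the image view to exactly match the aspect-fit image, so view coordinates map cleanly onto image coordinates in getCroppedImage (the box size used here is an assumption):

// Sketch: fit the image view's frame to the aspect-fit size of its image
CGSize fitted = [self makeSize:self.imageView.image.size
                     fitInSize:self.view.bounds.size];
CGRect frame = self.imageView.frame;
frame.size = fitted;
self.imageView.frame = frame;
self.imageView.contentMode = UIViewContentModeScaleAspectFit;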
Have you tried these:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint touchPoint = [touch locationInView:selectedImageView];
    float currX = touchPoint.x / selectedImageView.frame.size.width;
    float currY = touchPoint.y / selectedImageView.frame.size.height;
    /* proceed with other operations on currX and currY,
       which is coming out quite well */
}
Or you can also use a UIPanGestureRecognizer.
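A minimal sketch of the gesture-recognizer route (selectedImageView as above; handlePan: is a placeholder selector name):

// Sketch: track the drag with a pan gesture recognizer instead of raw touch events
- (void)viewDidLoad {
    [super viewDidLoad];
    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    selectedImageView.userInteractionEnabled = YES;
    [selectedImageView addGestureRecognizer:pan];
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGPoint p = [pan locationInView:selectedImageView];      // point inside the image view
    CGPoint shift = [pan translationInView:self.view];       // cumulative drag since it began
    // move or resize the frame using p / shift here ...
}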
In my view I have a few subviews; each is a UIImageView.
Each UIImageView contains an image with an alpha channel.
This is an image:
I use the method below to detect my touch location in the view:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchLocation = [touch locationInView:self.view];

    NSArray *views = [self.view subviews];
    for (UIView *v in views) {
        if ([v isKindOfClass:[Piece class]]) {
            if (CGRectContainsPoint(v.frame, touchLocation) && ((Piece *)v).snapped == FALSE) {
                UITouch *touchInPiece = [touches anyObject];
                CGPoint point = [touchInPiece locationInView:(Piece *)v];
                BOOL solidColor = [self verifyAlphaPixelImage:(Piece *)v atX:point.x atY:point.y];
                if (solidColor) {
                    dragging = YES;
                    oldX = touchLocation.x;
                    oldY = touchLocation.y;
                    piece = (Piece *)v;
                    [self.view bringSubviewToFront:piece];
                    break;
                }
            }
        }
    }
}
and this method to verify the alpha pixel:
- (BOOL)verifyAlphaPixelImage:(Piece *)image atX:(int)x atY:(int)y {
    CGImageRef imageRef = [image.image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    // CGFloat red   = rawData[byteIndex];
    // CGFloat green = rawData[byteIndex + 1];
    // CGFloat blue  = rawData[byteIndex + 2];
    CGFloat alpha = rawData[byteIndex + 3];
    NSLog(@"%f", alpha);
    free(rawData);

    if (alpha == 255.0) return NO;
    else return YES;
}
If an alpha (transparent) pixel is found, I need to pass the touch on to the UIImageView below the one I tapped.
For example, if I have stacked UIImageViews and I touch the first one:
I should check the first UIImageView;
if I touched an alpha pixel, I should move on to the next UIImageView at the same coordinates and check it for an alpha pixel too.
If the 2nd, 3rd, 4th or 5th does not have an alpha pixel at my coordinates, I should select that UIImageView.
For now I check my pixel, but my method returns the wrong value.
At the discussion for How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?, there's a correction to the routine you're using. Use calloc instead of malloc, or you'll start getting arbitrary results.
If your Piece class ever scales images, you will need to scale its x and y inputs by the scale the image is being displayed at.
The touchesBegan method oddly looks at some other view it contains, not itself. Is there a reason for this? What class is touchesBegan in?
Subviews are stored in drawing order from back to front, so when you iterate through subviews, you'll look at (and potentially select) the rearmost object first. Iterate from the last element of subviews towards the front instead.
Even after this works, it will be very inefficient, rendering every Piece image every time the user taps. You'll ultimately want to cache the pixel data for every Piece for rapid lookup.
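Regarding the back-to-front ordering point, here is a rough sketch of the loop in touchesBegan walking the subviews from front to back instead. It reuses the same ivars and methods as the original code; note that, as written above, verifyAlphaPixelImage: returns YES when the pixel is not fully opaque:

// Sketch: walk subviews front-to-back so the topmost solid Piece under the touch wins
for (UIView *v in [[self.view subviews] reverseObjectEnumerator]) {
    if (![v isKindOfClass:[Piece class]] || ((Piece *)v).snapped) continue;
    if (!CGRectContainsPoint(v.frame, touchLocation)) continue;

    CGPoint point = [touch locationInView:v];
    if (![self verifyAlphaPixelImage:(Piece *)v atX:point.x atY:point.y]) {
        // pixel is fully opaque here: select this piece
        dragging = YES;
        oldX = touchLocation.x;
        oldY = touchLocation.y;
        piece = (Piece *)v;
        [self.view bringSubviewToFront:piece];
        break;
    }
}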