My requirement is to create an app that shows live tracking of a cab, like well-known ride-hailing apps such as Ola and Uber.
Please let me know how to update the annotation, even for street turns and when the car reverses. How can I simulate a moving annotation using MKMapView? Is there any library I have to use? I searched but couldn't find one.
As I see it, the problem is the smooth turning of the annotation on the map, since you can already place your own custom image instead of the default blue dot.
For smooth turning you can use CMMotionManager, as it gives you the device acceleration, so you can rotate the image by keeping a reference to the annotation view. You receive the acceleration data through the device-motion updates. Once you have the user acceleration along x, y and z, you can obtain the heading angle with an arctangent. This should solve your problem.
Code for getting the angle:
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                   withHandler:^(CMDeviceMotion *motion, NSError *error) {
    // atan2 handles the y == 0 case and gives the correct quadrant
    double angle = atan2(motion.userAcceleration.x, motion.userAcceleration.y);
}];
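To actually turn the car image with that angle, here is a minimal sketch, assuming you keep a reference to the cab's annotation view (carAnnotationView below is an illustrative name, obtained from mapView:viewForAnnotation:, not something from the original code):

// inside the device-motion handler, after computing `angle`
[UIView animateWithDuration:0.2 animations:^{
    // rotate the custom car image so it follows the computed heading
    self.carAnnotationView.transform = CGAffineTransformMakeRotation(angle);
}];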
The solution to this problem is fairly straightforward. If you can't find a library, why not do it yourself and learn something? You just create some kind of Vehicle model class that stores its current coordinate and the most recently received coordinate. To be able to display it on the map it has to adopt the MKAnnotation protocol and implement title, subtitle and coordinate. The newCoordinate property gets set whenever position data arrives from the network. You need to track these two values to animate successfully,
and thus implement something like this:
@interface Vehicle : NSObject <MKAnnotation>
@property (nonatomic, readonly, copy) NSString *title;
@property (nonatomic, readonly, copy) NSString *subtitle;
@property (nonatomic, assign) CLLocationCoordinate2D coordinate;
@property (nonatomic, assign) CLLocationCoordinate2D newCoordinate;
@end
In the custom setter for newCoordinate you move the previous value into the coordinate property. Once you do this, you just animate the annotation as usual.
// new coordinate obtained from networking
- (void)setNewCoordinate:(CLLocationCoordinate2D)newCoordinate {
    // the previous target becomes the annotation's current position
    self.coordinate = _newCoordinate;
    _newCoordinate = newCoordinate;
}
But be careful when detecting taps on the animated annotation, because of the way Core Animation works: the annotation's frame is set to its final value as soon as the animation starts. You need to hit-test taps against the annotation's presentationLayer, which is what is actually drawn on screen during the animation.
To handle the taps, override
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
}
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
}
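As an illustration, here is a minimal sketch of the presentation-layer check, assuming a tap gesture recognizer on the map view and a single Vehicle annotation (the gesture handler, self.mapView and self.vehicle are illustrative names, not part of the original answer):

- (void)handleMapTap:(UITapGestureRecognizer *)tap {
    MKAnnotationView *vehicleView = [self.mapView viewForAnnotation:self.vehicle];

    // the presentation layer reflects what is on screen mid-animation;
    // its frame is expressed in the superlayer's (i.e. superview's) coordinates
    CALayer *presentation = vehicleView.layer.presentationLayer ?: vehicleView.layer;
    CGPoint pointInSuperview = [tap locationInView:vehicleView.superview];

    if (CGRectContainsPoint(presentation.frame, pointInSuperview)) {
        [self.mapView selectAnnotation:self.vehicle animated:YES];
    }
}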
Then animate with something like this (vehicle being the annotation instance):
[UIView animateWithDuration:0.5
                      delay:0
                    options:UIViewAnimationOptionCurveEaseIn
                 animations:^{
                     // setting the coordinate inside the block lets the map view slide the pin smoothly
                     vehicle.coordinate = vehicle.newCoordinate;
                 } completion:nil];
I am sorry but I can't post code here since I have previously implemented this for my employer and am bound by contract.
I'm currently creating a "slide-show" of pictures that the user can scroll through. Following a guide, I made the UIScrollView show the edges of the previous and next pictures as the user scrolls along. This comes with a side effect: the user can't scroll if he touches one of the edges of the pictures, because those edges are technically not within the bounds of the UIScrollView itself. To compensate for this, I am going to create a UIView in which I will embed the UIScrollView. The UIView will extend the entire width of the page so that the user can scroll when touching the edges of the pictures. To do this, I need to connect the UIScrollView to the code of the UIView through an IBOutlet. Usually this is accomplished by Ctrl-clicking on the UIScrollView and dragging to the code. However, this does not seem to work for me and I am stumped as to why.
Here is a screenshot of the environment I am dealing with when I try to Ctrl-click on the UIScrollView and drag to the code to create an IBOutlet (it simply doesn't give me the option to create anything).
Here is a screenshot of what running the simulator produces. If I try to click and drag where my mouse currently is, it doesn't scroll, which is the problem I am trying to correct.
It is because you need to link your storyboard view to your UIView class. You can choose your ScrollViewController class in the custom class settings. I added a sample jpg.
What you are looking for is not that 'easy' to get by just connecting the two.
There are two options:
1) In the future you should use a UICollectionView and implement the paging behaviour yourself; that is the cleanest code, I think.
NEVER MIND THE SECOND OPTION - I DIDN'T SEE YOU ACTUALLY ALREADY USED THIS
2) To directly answer your question:
You have to subclass the UIView that contains the UIScrollView like this:
Header:
#import <UIKit/UIKit.h>
@interface SCTouchForwardView : UIView
@property (nonatomic, weak) IBOutlet UIView *receiver;
@property (nonatomic, assign) BOOL force;
@end
Implementation:
#import "SCTouchForwardView.h"
@implementation SCTouchForwardView
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    UIView *result = [super hitTest:point withEvent:event];
    // a regular subview (e.g. the scroll view itself) was hit: let it handle the touch
    if (![result isKindOfClass:[SCTouchForwardView class]]) {
        return result;
    }
    if (result == self) {
        // the touch landed on the container itself: forward it to the receiver
        UIView *forwarded = [self.receiver hitTest:[self convertPoint:point toView:self.receiver]
                                         withEvent:event];
        if (forwarded) {
            return forwarded;
        }
        if (self.force) {
            return self.receiver;
        }
        return nil;
    }
    return nil;
}
@end
When you want to use this, you just have to right-click-drag from the container to the UIScrollView in Interface Builder and select 'receiver', and what you tried to do will work!
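If you build the hierarchy in code instead of Interface Builder, the same wiring is just one assignment; a small sketch, where the variable names and pageFrame are only placeholders for whatever you already use:

SCTouchForwardView *container = [[SCTouchForwardView alloc] initWithFrame:self.view.bounds];
UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:pageFrame]; // pageFrame: your narrower paging frame
[container addSubview:scrollView];
container.receiver = scrollView;  // touches outside the scroll view get forwarded to it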
I am trying to implement a draggable annotation on MKMapView, referring to this post: ios 7 MKMapView draggable annotation change it's position when map is scrolled
I am able to add the drag function to the annotation and it works fine, except I have found this weird bug:
after you drag the annotation,
tap on the map view and you will see the annotation title box disappear,
tap on the annotation again to bring the title box back, and it crashes every time.
I have put the source code at the link below; has anybody seen the same problem before?
https://www.dropbox.com/s/t8ipu12zyh04wzf/MapViewTest-master%202.zip
I have found the root cause: it is due to the annotation class I wrote. After changing the annotation to something like:
@interface Annotation : NSObject <MKAnnotation>
@property (nonatomic, assign) CLLocationCoordinate2D coordinate;
@property (nonatomic, copy) NSString *title;
@property (nonatomic, copy) NSString *subtitle;
@end
it works properly now. (A draggable annotation needs a settable coordinate, so declaring it as a readwrite assign property rather than readonly lets the map view update it during the drag.)
I am new to programming and really need help now. I have been looking for an answer for the past two months. I'm using Xcode and Objective-C. My question is about collision detection. There are thousands of examples of what to do when two rectangles collide using CGRect, such as showing an alert, flipping the screen, playing a sound, etc., but nothing anywhere about doing NOTHING, lol! All I want is for my object not to go through the other object; that is all, so I can keep dragging it on the screen. I just don't want the two objects on top of each other, and it seems like I'm the only one in the world who wants to do that, because I can't find anything. So please help, and since I'm new, keep it as simple as possible please. So here:
#import "YellowDot.h"
@interface YellowDot ()
@end
@implementation YellowDot
@synthesize Dot;
@synthesize CollisionImage;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *drag = [[event allTouches] anyObject];
    Dot.center = [drag locationInView:self.view];
    [self checkCollision];
}
- (void)checkCollision
{
    if (CGRectIntersectsRect(Dot.frame, CollisionImage.frame))
    {
        AudioServicesPlaySystemSound(playSoundId);
    }
}
- (void)viewDidLoad
{
    NSURL *soundURL = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"beep" ofType:@"wav"]];
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)soundURL, &playSoundId);
[super viewDidLoad];
// Do any additional setup after loading the view.
}
and here is the .h file :
#import <UIKit/UIKit.h>
#import <AudioToolbox/AudioToolbox.h>
@interface YellowDot : UIViewController
{
IBOutlet UIImageView *Dot;
IBOutlet UIImageView *CollisionImage;
SystemSoundID playSoundId;
}
@property (nonatomic, retain) UIImageView *Dot;
@property (nonatomic, retain) UIImageView *CollisionImage;
@end
So what could go in there? It's already playing a sound when they collide, as you can see, but that's it. Dot is the image that I'm dragging around the screen, and CollisionImage is the one that I want Dot to collide with but be stopped by, like a wall. Hope it's clear enough (I'm French, so sorry for the bad writing). :S Thank you.
Given that you're successfully detecting the collision, the answer is: if the move would cause a collision, don't update the object to the new position, i.e. just don't set Dot.center. The sequence is: get a touch event for the move, precompute the place where the object would end up, and if there is no collision, move it; if there is a collision, don't update its location.
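A minimal sketch of that precompute-then-check idea, reusing the Dot and CollisionImage outlets from your code as a revised touchesMoved: (the sound call on collision is optional):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *drag = [[event allTouches] anyObject];
    CGPoint proposedCenter = [drag locationInView:self.view];

    // build the frame the dot would occupy at the proposed position
    CGRect proposedFrame = Dot.frame;
    proposedFrame.origin.x = proposedCenter.x - proposedFrame.size.width / 2.0;
    proposedFrame.origin.y = proposedCenter.y - proposedFrame.size.height / 2.0;

    if (!CGRectIntersectsRect(proposedFrame, CollisionImage.frame)) {
        Dot.center = proposedCenter;                // no collision: apply the move
    } else {
        AudioServicesPlaySystemSound(playSoundId);  // collision: keep the old position
    }
}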
Note that OpenGL might be better suited to this kind of thing if you're going to do a lot of it.
Into the CALayer world:
I am creating a layer that needs to remain in the middle of the view regardless of device orientation. Can someone tell me why my layer animates from its old position after rotation, even though I removed it from its superlayer? I understand that the frame and borderWidth properties are animatable, but are they still animatable after removal from the superlayer?
And if removal from the superlayer does not reset the layer's properties because the layer object has not been released (OK, I can understand that), how do I mimic the behaviour of a newly displayed layer, so that the border does not appear to slide over from its old position after rotation?
I created this sample project; copy and paste it if you wish. You will just need to link the QuartzCore framework.
#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
@property (nonatomic, strong) CALayer *layerThatKeepAnimating;
@end
@implementation ViewController
-(CALayer*) layerThatKeepAnimating
{
if(!_layerThatKeepAnimating)
{
_layerThatKeepAnimating=[CALayer layer];
_layerThatKeepAnimating.borderWidth=2;
}
return _layerThatKeepAnimating;
}
-(void) viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    self.layerThatKeepAnimating.frame=CGRectMake(self.view.bounds.size.width/2-50,self.view.bounds.size.height/2-50, 100, 100);
    [self.view.layer addSublayer:self.layerThatKeepAnimating];
}
-(void) willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
[self.layerThatKeepAnimating removeFromSuperlayer];
}
-(void) didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation
{
self.layerThatKeepAnimating.frame=CGRectMake(self.view.bounds.size.width/2-50,self.view.bounds.size.height/2-50, 100, 100);
[self.view.layer addSublayer:self.layerThatKeepAnimating];
}
@end
As odd as this sounds, the answer is to move the code from
willRotateToInterfaceOrientation
to
viewWillLayoutSubviews
-(void) viewWillLayoutSubviews
{
self.layerThatKeepAnimating.frame=CGRectMake(self.view.bounds.size.width/2-50,self.view.bounds.size.height/2-50, 100, 100);
[self.view.layer addSublayer:self.layerThatKeepAnimating];
}
It looks like any layer "redrawing" that happens here is applied without animation, even though the layer properties are animatable.
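If an unwanted implicit animation ever does show up when you set the frame directly, another option (not strictly needed with the viewWillLayoutSubviews approach above) is to wrap the change in a CATransaction with actions disabled:

[CATransaction begin];
[CATransaction setDisableActions:YES];  // suppress the implicit frame/border animation
self.layerThatKeepAnimating.frame = CGRectMake(self.view.bounds.size.width/2 - 50,
                                               self.view.bounds.size.height/2 - 50,
                                               100, 100);
[CATransaction commit];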
Well, the problem is not what you think: when you remove that layer from its superlayer it is not actually deallocated, because you are keeping a strong reference to it. Your code never enters the if statement in the getter that would create a fresh layer, because the property is never nil after the first time:
if(!_layerThatKeepAnimating)
{
_layerThatKeepAnimating=[CALayer layer];
_layerThatKeepAnimating.borderWidth=2;
}
So either change your reference to the layer in the view controller to weak:
@property (nonatomic, weak) CALayer *layerThatKeepAnimating;
Or delete it explicitly by:
-(void) willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
[self.layerThatKeepAnimating removeFromSuperlayer];
self.layerThatKeepAnimating = nil;
}
I would suggest the first option, because when you add layers (or subviews) to a view, the view already holds one strong reference for you. That's why outlets are usually declared like this:
@property (weak, nonatomic) IBOutlet UIView *view;
and not (strong, nonatomic).
In general, the explicit-removal pattern looks like this:
[self.sublayerToRemove removeFromSuperlayer];
self.sublayerToRemove = nil;
When using MapKit and placing a pin (an MKAnnotation), you implement a class conforming to MKAnnotation that carries the coordinates, name and description:
@interface MyAnnotationClass : NSObject <MKAnnotation> {
    NSString *_name;
    NSString *_description;
    CLLocationCoordinate2D _coordinate;
}
@property (nonatomic, retain) NSString *name;
@property (nonatomic, retain) NSString *description;
@property (nonatomic, readonly) CLLocationCoordinate2D coordinate;
- (id)initWithCoordinate:(CLLocationCoordinate2D)coordinate;
@end
These core details are displayed when a pin is placed on the map. However, when you want that same pin to have a button, you implement the following method in the map view's delegate:
-(MKAnnotationView *)mapView:(MKMapView *)mapview viewForAnnotation:(id <MKAnnotation>)annotation {
...
annotationView.rightCalloutAccessoryView = [UIButton buttonWithType:UIButtonTypeDetailDisclosure];
return(annotationView);
}
Why is this the convention? I would expect the rightCalloutAccessoryView to be part of MKAnnotation. I know that it is optional, so could it be that the right callout object is part of a more specific method?
It is confusing to me because you set the attributes of a pin when creating the MKAnnotation object, but then set other attributes when the pin is being placed on the map. Why aren't both of these done in the same place? Is it possible to do it that way?
Usually the MKAnnotation protocol is implemented by model objects, and the model layer isn't supposed to know anything about views. So to honour the MVC pattern, the callout view cannot be part of the MKAnnotation object.
The philosophy behind this is mostly to postpone setting up the view until it is actually needed, rather than keeping a complex object graph inside the map. It also lets you postpone decisions.
Also, notice that you are talking about two different things: one is an annotation (the model), the other is a view.
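To make the split concrete, here is a minimal sketch of a typical delegate implementation for the MyAnnotationClass above; the reuse identifier and the dequeuing pattern are standard MapKit usage, not something taken from your code:

- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation {
    // the model object (MyAnnotationClass) only supplies the data;
    // everything visual is configured here, in the view layer
    if (![annotation isKindOfClass:[MyAnnotationClass class]]) {
        return nil; // e.g. keep the default view for the user-location annotation
    }

    static NSString *reuseId = @"MyAnnotation";
    MKPinAnnotationView *annotationView =
        (MKPinAnnotationView *)[mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
    if (!annotationView) {
        annotationView = [[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                          reuseIdentifier:reuseId];
        annotationView.canShowCallout = YES;
        annotationView.rightCalloutAccessoryView = [UIButton buttonWithType:UIButtonTypeDetailDisclosure];
    } else {
        annotationView.annotation = annotation;
    }
    return annotationView;
}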