I'm working on an iOS app with a Mapbox map.
I'm displaying MGLAnnotations, but I only want to create the annotations that are visible on screen at the current moment.
The problem is: I have a UIView at the bottom of my screen, so the bounds I get from mapView.visibleCoordinateBounds are not really accurate, because I can't see the bottom of my map.
I can't just resize my map, because the bottom view I mentioned doesn't cover the entire map; parts of the map are still visible behind it.
So my question is: how can I get the visibleCoordinateBounds for a CGRect over my MapView?
My current check works only with the entire map view's bounds, not the bounds of the CGRect area in which I want to display the annotations:
let bounds = mapView.visibleCoordinateBounds
if bounds.sw.latitude...bounds.ne.latitude ~= latX &&
   bounds.sw.longitude...bounds.ne.longitude ~= lngY {
    print("VISIBLE")
}
Thanks
I finally found a way to do it.
I calculate the fraction of my map view that is hidden by the bottom view and shift the southern latitude north by that fraction of the visible latitude span:
// Latitude span currently covered by the whole map view
var valueToRemoveBottom = mapView.visibleCoordinateBounds.ne.latitude - mapView.visibleCoordinateBounds.sw.latitude
// Fraction of the map view hidden by the bottom view
let fractionUsedByBottom = bottomView.frame.height / mapView.frame.height
valueToRemoveBottom = valueToRemoveBottom * Double(fractionUsedByBottom)

if (mapView.visibleCoordinateBounds.sw.latitude + valueToRemoveBottom)...mapView.visibleCoordinateBounds.ne.latitude ~= latX &&
   mapView.visibleCoordinateBounds.sw.longitude...mapView.visibleCoordinateBounds.ne.longitude ~= lngY {
    // visible
}
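For completeness: because Web Mercator stretches latitude toward the poles, this percentage trick is only approximate. If your Mapbox SDK version exposes it, MGLMapView can convert a CGRect in view space straight to coordinate bounds, which avoids the latitude math entirely. A minimal sketch, assuming the bottom view sits flush against the bottom edge of the map view (older SDKs name the method convertRect(_:toCoordinateBoundsFromView:)):

// The part of the map NOT covered by the bottom view
let visibleRect = CGRect(x: 0,
                         y: 0,
                         width: mapView.bounds.width,
                         height: mapView.bounds.height - bottomView.frame.height)
// Assumed API: convert(_:toCoordinateBoundsFrom:) on MGLMapView
let uncovered = mapView.convert(visibleRect, toCoordinateBoundsFrom: mapView)
if uncovered.sw.latitude...uncovered.ne.latitude ~= latX &&
   uncovered.sw.longitude...uncovered.ne.longitude ~= lngY {
    // the annotation lies inside the uncovered part of the map
}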
This is an example view:
I want to calculate a frame with a CGPoint where I can spawn another card (a UIView) without touching any existing card. Of course this is optional, since the view can be full of cards, in which case there is no free spot.
This is how I collect the position of every card on the screen; my function currently looks like this:
func freeSpotCalculator() -> CGPoint? {
    var takenSpots = [CGPoint]()
    for card in playableCards {
        takenSpots.append(card.center)
    }
    // TODO: find and return a free spot, or nil when the view is full
    return nil
}
I have no idea where to start or how to calculate a random CGPoint on the screen. The random frame has the same width and height as a card on the screen.
The naive approach to this is very simple, but can get problematic once the screen fills up. Generate a random CGPoint with an x coordinate between 0 and the screen width and a y coordinate between 0 and the screen height. Check whether a rectangle with its center at that point intersects any existing view. If it does not, you have your random position.
Where this gets problematic is when the screen starts to fill up. At that point you could be trying many, many random points before finding a place to put the card. You could also reach a situation where no more cards fit. How do you know that you have reached it? Will your loop generating the random points just run forever?
A smarter solution is to keep track of the free spaces on the screen and always generate your random points roughly within them. You could do this using a grid, if approximate placement is close enough: is there a card occupying each grid location? Then, when the largest free space is smaller than your card rectangle, you know you're done. It's a lot more work than the naive approach, but it's faster once the screen starts to fill up and you'll know for sure when you're done.
If you know that you will always have more screen space than the cards can possibly take up, the naive approach is fine.
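A minimal sketch of the naive approach (Swift 4.2+ for CGFloat.random(in:); container, cardSize, and the attempt cap are illustrative names, with the cap standing in for the "runs forever" guard):

// Try random centers until one's rect misses every existing card.
func randomFreeSpot(in container: UIView,
                    cardSize: CGSize,
                    avoiding cards: [UIView],
                    maxAttempts: Int = 200) -> CGPoint? {
    for _ in 0..<maxAttempts {
        // Keep the whole card on screen by insetting the random range
        let center = CGPoint(
            x: CGFloat.random(in: cardSize.width / 2...container.bounds.width - cardSize.width / 2),
            y: CGFloat.random(in: cardSize.height / 2...container.bounds.height - cardSize.height / 2))
        let candidate = CGRect(x: center.x - cardSize.width / 2,
                               y: center.y - cardSize.height / 2,
                               width: cardSize.width,
                               height: cardSize.height)
        if !cards.contains(where: { $0.frame.intersects(candidate) }) {
            return center
        }
    }
    return nil // gave up; the view may be (almost) full
}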
The idea
You know the width and height of your container UIView. And, each card has the same width and height. I would go about this by calculating a grid.
Even though you want to display cards randomly, relying on a grid will give you a standardized array of centers that you can use to generate the appearance of randomness (place a card at any random center that is a part of the grid, for example).
If you were to place a card at truly any random location, you might just want to use CGRectIntersectsRect(card1.frame, card2.frame) to detect collisions.
The pattern
First, let's store the card width and height as constants.
let cardWidth = card.bounds.size.width
let cardHeight = card.bounds.size.height
As a basic proof of concept, let's say your container view width is 250 points. Let's say the card width is 5 points. That means you can fit 250 / 5 = 50 cards in one row, where one row has the height of one card.
The number of centers in a row = the number of cards in that row. Each center is the same distance apart. In the following diagram (if I can even call it that), the [ and ] represent edges of a row. The -|- represents a card, where | is the center of the card.
[ - | - - | - - | - - | - - | - ]
Notice how every center is two dashes away from the next center. The only consideration is that the center next to the edge is one dash away from the edge. In terms of cards, each center is one whole card away from the next, and the centers next to the edges are one half card away from the edges.
The key to the pattern
This pattern means that the x position of any card center in a specific row = (cardWidth / 2) + (the card index * cardWidth). In fact, this pseudo-equation works for y positions as well.
The code
Here's some Swift that creates an array of centers using this method.
var centers = [CGPoint]()

let numberOfRows: CGFloat = containerView.bounds.size.height / cardHeight
let numberOfCardsPerRow: CGFloat = containerView.bounds.size.width / cardWidth

for row in 0 ..< Int(numberOfRows) {
    for card in 0 ..< Int(numberOfCardsPerRow) {
        // The row we are on affects the y values of all the centers
        let yBasedOnRow = (cardHeight / 2) + (CGFloat(row) * cardHeight)
        // The xBasedOnCard formula is effectively the same as the yBasedOnRow one
        let xBasedOnCard = (cardWidth / 2) + (CGFloat(card) * cardWidth)
        // Each possible center for this row gets appended to the centers array
        centers.append(CGPoint(x: xBasedOnCard, y: yBasedOnRow))
    }
}
This code should create a grid of centers for your cards. You could build a function around it that returns a random center for a card to be placed and keeps track of used centers.
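For instance, one possible shape for that function, as a sketch that consumes the centers array built above (nil means every center has been used):

// Hands out unused grid centers at random; nil once the grid is exhausted.
var availableCenters = centers
func randomCenter() -> CGPoint? {
    guard !availableCenters.isEmpty else { return nil }
    let index = Int(arc4random_uniform(UInt32(availableCenters.count)))
    return availableCenters.remove(at: index)
}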
Potential improvements
First, I think that the centers array could be made a matrix ([[CGPoint]]()) for more logical storage of points.
Second, this code currently assumes that the width and height of the container view are divisible by the card width and height. For example, a container width of 177 and a card width of 5 would leave a 2-point strip unused at one edge (Int truncates 35.4 down to 35 columns). The code could be fixed a number of different ways to account for this; one is sketched below.
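One such fix, sketched here: truncate the column count and split the leftover width between the two edges (the same idea applies vertically):

// 177 / 5 gives 35 whole columns; the 2 leftover points become edge insets.
let columns = Int(containerView.bounds.width / cardWidth)
let leftoverWidth = containerView.bounds.width - CGFloat(columns) * cardWidth
let xInset = leftoverWidth / 2
// Then, inside the loop above:
// let xBasedOnCard = xInset + (cardWidth / 2) + (CGFloat(card) * cardWidth)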
The best solution for simplicity and performance is to display the cards randomly, BUT inside a grid. The trick is to make the grid cells bigger than the card size, so the card's position inside its cell is random.
It is easy to check which positions are occupied, and the cards still end up on "random" frames.
1- Create a Collection View Controller with the total number of cards you want to display (say, the maximum number of cards that fit on the screen).
2- Set the prototype cell size bigger than the card. If the card is 50x80, then the cell could be 70x110.
3- Add a UIImageView to the cell with constraints; this will be your card image.
4- Create a UICollectionViewCell subclass with a method that sets the card's frame randomly inside the cell (by modifying the constraints).
Done!
Cells with no card can show no image, or stay empty, as you wish. So to add a new card, just pick a random empty cell, and the card gets its random coordinates inside that cell.
Your UICollectionViewCell would look like this:
class CardCollectionViewCell: UICollectionViewCell {

    @IBOutlet weak var card: UIImageView!

    override func awakeFromNib() {
        super.awakeFromNib()
        // Random offsets that keep the whole card inside the cell bounds
        let newX = CGFloat(arc4random_uniform(UInt32(bounds.size.width - card.bounds.size.width + 1)))
        let newY = CGFloat(arc4random_uniform(UInt32(bounds.size.height - card.bounds.size.height + 1)))
        card.leftAnchor.constraintEqualToAnchor(leftAnchor, constant: newX).active = true
        card.topAnchor.constraintEqualToAnchor(topAnchor, constant: newY).active = true
    }
}
And your Collection View Controller should look like this:
Collection View Image
As far as I can see in the picture, all your cards are aligned at the bottom of the view. So if you generate a random y position from 0 to (the y origin of your cards' row minus one card height), you can simply get a CGPoint based on the frame of your view and the size of the cards.
If you want to randomly place cards along the screen, you could do something like this:
func random() -> CGFloat {
    // arc4random() is uniform over 0...0xFFFFFFFF, so this is uniform over 0...1
    return CGFloat(Float(arc4random()) / 0xFFFFFFFF)
}

func random(min: CGFloat, max: CGFloat) -> CGFloat {
    return random() * (max - min) + min
}
let actualX = random(min: *whatever amount*, max: *whatever amount* )
let actualY = random(min: *Height of the first card*, max: *whatever amount* )
card.position = CGPoint(x: actualX, y: actualY )
The cards will then be positioned randomly above the existing cards.
I am not sure if you are planning to place all the cards in an orderly way. But if you do, you could do it like this.
Once the view is loaded, get all the possible card positions and store them in a map (dictionary) with a number as the key. Then you can generate a random number from 0 to the total number of possible positions stored in the map. Every time you occupy a position, remove its entry from the map.
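A sketch of that bookkeeping (Swift 4.2+ for randomElement(); centers stands in for whatever positions you precompute):

// Precomputed positions keyed by index; removing an entry marks it occupied.
var freePositions = Dictionary(uniqueKeysWithValues:
    centers.enumerated().map { ($0.offset, $0.element) })

func takeRandomPosition() -> CGPoint? {
    guard let key = freePositions.keys.randomElement() else { return nil }
    return freePositions.removeValue(forKey: key)
}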
You can try CAShapeLayer and UIBezierPath.
1. Create a CAShapeLayer for the main view you will be adding subviews to. Let's call it the main shape layer. It helps to check that an estimated new view lies within the main view.
2. Create a UIBezierPath instance. Whenever a valid new subview is found, add its edges to this path.
3. Create a random point within the main view (steps 3-7 are sketched in code after this list).
4. Create a CGRect with the random point as the center of your subview. Let's call it the estimated view frame.
5. Check that the estimated view frame is completely visible in your main view. If not, go to step 3.
6. Check the four edges of the estimated view frame against the path object. If any one of the edges is inside the path, go to step 3.
7. If none of the four edges are inside the path, the estimated view frame is the new view's frame.
8. Create the new subview and add it to your main view.
9. Add the edges of the new view to the path.
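A condensed sketch of steps 3-7 (Swift 4.2+ for CGFloat.random(in:); mainView and the accumulated occupiedPath are assumed names):

// Try up to `attempts` random centers; reject frames that leave the main
// view (step 5) or whose corners fall inside the occupied path (step 6).
func estimateFrame(size: CGSize, in mainView: UIView,
                   avoiding occupiedPath: UIBezierPath,
                   attempts: Int = 50) -> CGRect? {
    for _ in 0..<attempts {
        let center = CGPoint(x: CGFloat.random(in: 0...mainView.bounds.width),
                             y: CGFloat.random(in: 0...mainView.bounds.height))
        let frame = CGRect(x: center.x - size.width / 2,
                           y: center.y - size.height / 2,
                           width: size.width, height: size.height)
        guard mainView.bounds.contains(frame) else { continue }
        let corners = [CGPoint(x: frame.minX, y: frame.minY),
                       CGPoint(x: frame.maxX, y: frame.minY),
                       CGPoint(x: frame.minX, y: frame.maxY),
                       CGPoint(x: frame.maxX, y: frame.maxY)]
        if corners.contains(where: occupiedPath.contains) { continue }
        return frame
    }
    return nil
}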
I have created a sample project in Swift with the above logic.
https://github.com/mcabasheer/find-free-space-in-uiview
You can change the width and height of the new view. I have added a condition to stop looking for the next free space after 50 tries, which avoids an infinite loop.
As @davecom highlighted, picking a purely random point wastes space and you will run out of room quickly. If you keep track of the available free space, you can fit more subviews.
In my application I have a UIView. I want the user to be able to drag the view from its original position to a particular limited position. For this I used the UIPanGestureRecognizer class, and in the gestureRecognizer.state == .Changed condition I change the coordinates of the view. I can drag the view to the limited position when moving slowly, but the problem is that if the user drags the view very rapidly upward or downward, the view can be pulled beyond the limits I put on the Y position.
if upperLimit > (self.topbaseConstrant.constant * -1) {
    self.topbaseConstrant.constant += gestureRecognizer.translationInView(self.view!).y
    gestureRecognizer.setTranslation(CGPointZero, inView: self.view!)
}
I have been trying to solve this issue for the last three days. Please give me a suggestion.
Thanks in advance
Use the min function to enforce the upper limit:
let newPosition = topbaseConstrant.constant + panGestureRecognizer.translationInView(nil).y
topbaseConstrant.constant = min(upperLimit, newPosition)
If you drag quickly and blow past your constraint, the min function will still return that upper limit as your new position.
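If you also need a lower bound, clamp from both sides (a sketch; lowerLimit is an assumed constant):

let newPosition = topbaseConstrant.constant + panGestureRecognizer.translationInView(nil).y
// min caps the travel at the upper limit, max at the lower one
topbaseConstrant.constant = max(lowerLimit, min(upperLimit, newPosition))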
Good day.
I just started using Corona and I'm kind of confused by the x and y properties. Is it possible to get the x and y values from the Top, Left, Width and Height properties if these are provided? For example, I want an object to be at Left=10, Top=0, Width=40 and Height=40. Can someone please advise how I can do this? This applies to images, text, text fields, etc.
Of course. There are several methods to do this.
Example 1:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.anchorX = 0; myImage.anchorY = 0
myImage.x = 10 -- Left gap
myImage.y = 0 -- Top gap
localGroup:insert(myImage)
Here, setting the anchor point to (0,0) moves the image's reference point from its geometric center to its top left corner.
Example 2:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.x = (myImage.contentWidth/2) + 10
myImage.y = (myImage.contentHeight/2)
localGroup:insert(myImage)
Here, the center-X position of the image is calculated by adding the Left gap to the image's own half width, and the center-Y position by adding the Top gap (0 in this case) to its half height.
You can position objects with any of these methods. If you are a beginner with Corona, the following topics will give you more knowledge about displaying objects with a specific size, position, etc.:
Corona SDK : display.newImageRect()
Tutorial: Anchor Points in Graphics 2.0
Corona uses something like a Cartesian coordinate system, but (0,0) is at the TOP LEFT. You can read more here:
https://docs.coronalabs.com/guide/graphics/group.html#coordinates
BUT: you can get the screen corners based on the image width and height using the code below. Note that you should replace the file name, width, and height with your own image's.
-- assumes the usual letterbox helpers, e.g.:
-- local screenW, screenH = display.contentWidth, display.contentHeight
-- local screenOffsetW = display.contentWidth - display.viewableContentWidth
local image = display.newImageRect("images/yourImage.png", width, height)
--TOP:
image.y = math.floor(display.screenOriginY + image.height*0.5)
--BOTTOM:
image.y = math.floor(screenH - display.screenOriginY) - image.height*0.5
--LEFT:
image.x = (screenOffsetW*0.5) + image.width*0.5
--RIGHT:
image.x = math.floor(screenW - screenOffsetW*0.5) - image.width*0.5
Corona SDK display objects have attributes that can be read or set:
X = myObject.x -- gets the current center (by default) of myObject
width = myObject.width
You can set these values too....
myObject.x = 100 -- centers the object 100px from the left edge of the content area
By default, Corona SDK display objects are positioned based on their center, unless you change the anchor point:
myObject.anchorX = 0
myObject.anchorY = 0
myObject.x = 100
myObject.y = 100
By setting the anchors to 0, .x and .y refer to the top left of the object.
I currently have a large map that goes off the screen; because of this, its coordinate system is very different from my other nodes'. This has led me to a problem: I need to generate a random CGPoint within the bounds of this map, and if that point is on screen I place a visible node there. However, the check on whether the node is on screen continuously fails.
I'm checking whether the node is in frame with the following code: CGRectContainsPoint(self.frame, values) (with values being the random CGPoint I generated). This is where my problem comes in: the coordinate system of the frame is completely different from the coordinate system of the map.
For example, in the picture below, the ball with the arrows pointing to it is at coordinates (479, 402) in the scene's coordinates, but at (9691, 9753) in the map's coordinates.
I determined the coordinates using the touchesBegan event, for those who are wondering. So basically, how do I convert the map's coordinate system to one that works for the frame?
As seen below, the dot is obviously in the frame, yet CGRectContainsPoint always fails. I've tried scene.convertPoint(position, fromNode: map) but it didn't work.
Edit (to clarify some things):
My view hierarchy looks something like this:
The map node goes off screen and is about 10,000x10,000 in size (I have it as a scrolling map). Its origin (0,0) is in the bottom left corner, where the map starts, meaning the origin is off screen. In the picture above, I'm near the top right part of the map. I'm generating a random CGPoint with the following code (passing it the map's frame) as an extension to CGPoint:
static func randPoint(within: CGRect) -> CGPoint {
    var point = within.origin
    point.x += CGFloat(arc4random() % UInt32(within.size.width))
    point.y += CGFloat(arc4random() % UInt32(within.size.height))
    return point
}
I then have the following code, called in didMoveToView (note that I'm applying this to the nodes I'm generating; I just left that code out), where values is the random position:
let values = CGPoint.randPoint(map.totalFrame)
if !CGRectContainsPoint(self.frame, convertPointToView(scene!.convertPoint(values, fromNode: map))) {
    color = UIColor.clearColor()
}
This makes nodes that are off screen invisible (since the user can scroll the map background). The condition always passes as true, making all nodes invisible, even though some nodes are indeed within the frame (as seen in the picture above, where I commented out the clear-color code).
If I understand your question correctly, you have an SKScene containing an SKSpriteNode that is larger than the scene's view, and you are randomly generating coordinates within that sprite's coordinate system that you want to map to the view.
You're on the right track with SKNode's convertPoint(_:fromNode:) (where your scene is the SKNode and your map is the fromNode). That gets you from the generated map coordinate to the scene coordinate. Next, convert that coordinate to the view's coordinate system using your scene's convertPointToView(_:). The point is out of view if it falls outside the view's bounds.
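Concretely, the two hops might look like this (Swift 2-era SpriteKit names to match the question; mapPoint stands for the randomly generated point):

// 1. map's coordinate space -> scene's coordinate space
let scenePoint = scene!.convertPoint(mapPoint, fromNode: map)
// 2. scene's coordinate space -> the SKView's coordinate space
let viewPoint = scene!.convertPointToView(scenePoint)
// On screen means inside the view's bounds
let onScreen = CGRectContainsPoint(scene!.view!.bounds, viewPoint)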
Using a worldNode that includes a playerNode, with the camera centered on this node, you can check whether an object is on or off screen with this code:
float left = player.position.x - 700;
float right = player.position.x + 700;
float up = player.position.y + 450;
float down = player.position.y - 450;

if ((object.position.x > left) && (object.position.x < right) &&
    (object.position.y > down) && (object.position.y < up)) {
    if ((object.parent == nil) && (object.dead == false)) {
        [worldNode addChild:object];
    }
} else {
    if (object.parent != nil) {
        [object removeFromParent];
    }
}
The numbers I used above are static. You can also make them dynamic:
CGRect screenRect = [[UIScreen mainScreen] bounds];
CGFloat screenWidth = screenRect.size.width;
CGFloat screenHeight = screenRect.size.height;
Divide the screenWidth by 2 for left and right, and the screenHeight by 2 for up and down.
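For reference, the same check in Swift, with the half-extents taken from the screen size (a sketch; player, object, object.dead and worldNode are the same assumed names as in the code above):

let screenSize = UIScreen.mainScreen().bounds.size
let halfWidth = screenSize.width / 2
let halfHeight = screenSize.height / 2
// Distance from the player (the camera's focus) on each axis
let dx = abs(object.position.x - player.position.x)
let dy = abs(object.position.y - player.position.y)
if dx < halfWidth && dy < halfHeight {
    if object.parent == nil && !object.dead {
        worldNode.addChild(object)
    }
} else if object.parent != nil {
    object.removeFromParent()
}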
I have an app with some annotations on it, and up until now they have just been symbols that look good with the default behavior (e.g. numbers and letters). Their orientation is fixed relative to the device, which is appropriate.
Now, however, I need some annotations to be fixed in orientation to the actual map, so if the map rotates, the annotation symbols rotate with it (like an arrow indicating the flow of a river, for example).
I don't want them to scale with the map like an overlay, but I do want them to rotate with the map.
I need a solution that primarily works when the user manually rotates the map with their fingers, and also when it rotates due to being in heading-tracking mode.
On Google Maps (at least on Android) this is very easy, with a simple MarkerOptions.flat(true).
I am sure it won't be too much more difficult on iOS, I hope.
Thanks in advance!
Here's what I used for something similar.
- (void)rotateAnnotationView:(MKAnnotationView *)annotationView toHeading:(double)heading
{
    // Convert mapHeading to a 360-degree scale.
    CGFloat mapHeading = self.mapView.camera.heading;
    if (mapHeading < 0) {
        mapHeading = fabs(mapHeading);
    } else if (mapHeading > 0) {
        mapHeading = 360 - mapHeading;
    }

    CGFloat offsetHeading = (heading + mapHeading);
    while (offsetHeading > 360.0) {
        offsetHeading -= 360.0;
    }

    CGFloat headingInRadians = offsetHeading * M_PI / 180;
    annotationView.layer.affineTransform = CGAffineTransformMakeRotation(headingInRadians);
}
And then, this is called in mapView:regionDidChangeAnimated:, etc.
Unfortunately, this solution doesn't rotate the annotations while the user is rotating the map, but it corrects itself afterwards to make sure everything ends up with the proper heading. I wrap some of the affineTransform work in an animation block to make it look nice.
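That animation wrapping can be as simple as this (a Swift sketch of the same idea; the duration is arbitrary):

// Animate the rotation so the annotation swings to its new heading.
UIView.animateWithDuration(0.3) {
    annotationView.layer.setAffineTransform(CGAffineTransformMakeRotation(headingInRadians))
}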
Hopefully this can help, or maybe help get you pointed in the right direction.