This is an example view:
I want to calculate a frame with a CGPoint where I can spawn another card (UIView) without touching any existing card. Of course the result is optional, since the view can be full of cards, in which case there is no free spot.
This is how I collect every card on the screen, and my function as it stands now:
func freeSpotCalculator() -> CGPoint? {
    var takenSpots = [CGPoint]()
    for card in playableCards {
        takenSpots.append(card.center)
    }
    // TODO: pick a random free CGPoint that avoids all takenSpots, or return nil
    return nil
}
I have no idea where to start or how to calculate a random CGPoint on the screen. The random frame has the same width and height as the cards already on the screen.
The naive approach to this is very simple, but could be problematic once the screen fills up. Generate a random CGPoint with x coordinate between 0 and the screen width and a y coordinate between 0 and the screen height. Check if a rectangle with a center at that point intersects any existing view. If it does not, you have your random position.
Where this gets problematic is when the screen starts to fill up. At that point you could be trying many many random points before finding a place to put the card. You could also reach a situation where no more cards will fit. How do you know that you have reached that? Will your loop generating the random points just run forever?
A smarter solution is to keep track of the free spaces on the screen. Always generate your random points roughly within these free spaces. You could do this using a grid, if an approximate position is close enough. Is there a card occupying each grid location? Then when the largest free space is smaller than the size of your card rectangle, you know you're done. It's a lot more work than the naive approach, but it's faster when the screen starts to fill up and you'll know for sure when you're done.
If you know that you will always have more screen space than the cards can possibly take up, the naive approach is fine.
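For illustration, here is a minimal sketch of the naive approach, assuming the playableCards array from the question plus an assumed cardSize constant, and capping the attempts so the loop cannot run forever:

func naiveFreeSpot(in container: UIView, cardSize: CGSize, maxAttempts: Int = 200) -> CGPoint? {
    for _ in 0..<maxAttempts {
        // Keep the whole card on screen by insetting the random range.
        let x = CGFloat.random(in: cardSize.width / 2 ... container.bounds.width - cardSize.width / 2)
        let y = CGFloat.random(in: cardSize.height / 2 ... container.bounds.height - cardSize.height / 2)
        let candidate = CGRect(x: x - cardSize.width / 2,
                               y: y - cardSize.height / 2,
                               width: cardSize.width,
                               height: cardSize.height)
        // Reject the point if the rectangle overlaps any existing card.
        if !playableCards.contains(where: { $0.frame.intersects(candidate) }) {
            return CGPoint(x: x, y: y)
        }
    }
    return nil // Gave up; the screen may be (nearly) full.
}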
The idea
You know the width and height of your container UIView. And, each card has the same width and height. I would go about this by calculating a grid.
Even though you want to display cards randomly, relying on a grid will give you a standardized array of centers that you can use to generate the appearance of randomness (place a card at any random center that is a part of the grid, for example).
If you were to place a card at truly any random location, you might just want to use card1.frame.intersects(card2.frame) to detect collisions.
The pattern
First, let's store the card width and height as constants.
let cardWidth = card.bounds.size.width
let cardHeight = card.bounds.size.height
As a basic proof of concept, let's say your container view width is 250 points. Let's say the card width is 5 points. That means you can fit 250 / 5 = 50 cards in one row, where one row has the height of one card.
The number of centers in a row = the number of cards in that row. Each center is the same distance apart. In the following diagram (if I can even call it that), the [ and ] represent edges of a row. The -|- represents a card, where | is the center of the card.
[ - | - - | - - | - - | - - | - ]
Notice how every center is two dashes away from the next center. The only consideration is that the center next to the edge is one dash away from the edge. In terms of cards, each center is one whole card away from the next, and the centers next to the edges are one half card away from the edges.
The key to the pattern
This pattern means that the x position of any card center in a specific row = (cardWidth / 2) + (the card index * cardWidth). In fact, this pseudo-equation works for y positions as well.
The code
Here's some Swift that creates an array of centers using this method.
var centers = [CGPoint]()

let numberOfRows: CGFloat = containerView.bounds.size.height / cardHeight
let numberOfCardsPerRow: CGFloat = containerView.bounds.size.width / cardWidth

for row in 0 ..< Int(numberOfRows) {
    for card in 0 ..< Int(numberOfCardsPerRow) {
        // The row we are on affects the y values of all the centers
        let yBasedOnRow = (cardHeight / 2) + (CGFloat(row) * cardHeight)
        // The xBasedOnCard formula is effectively the same as the yBasedOnRow one
        let xBasedOnCard = (cardWidth / 2) + (CGFloat(card) * cardWidth)
        // Each possible center for this row gets appended to the centers array
        centers.append(CGPoint(x: xBasedOnCard, y: yBasedOnRow))
    }
}
This code should create a grid of centers for your cards. You could build a function around it that returns a random center for a card to be placed and keeps track of used centers.
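As a minimal sketch of such a function, assuming the centers array built above: shuffle the centers once, then hand one out per placed card; nil means the grid is full.

var availableCenters = centers.shuffled()

func freeSpotCalculator() -> CGPoint? {
    // Each call consumes one unused center; nil once the grid is exhausted.
    return availableCenters.popLast()
}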
Potential improvements
First, I think that the centers array could be made a matrix ([[CGPoint]]()) for more logical storage of points.
Second, this code currently assumes that the width and height of the container view are divisible by the card width and height. For example, a container width of 177 and a card width of 5 would leave a 2-point strip uncovered, because Int() truncates the card count. The code could be fixed a number of different ways to account for this.
The simplest, best-performing solution is to display the cards randomly, but inside a grid. The trick is to make each grid cell bigger than the card, so the card's position inside its cell can be random.
It is then easy to check which positions are occupied, and the cards still end up at "random" frames.
1- Create a Collection View Controller with the total number of cards you want to display (say, the maximum number of cards that fit on the screen).
2- Set the prototype cell size bigger than the card. If the card is 50x80, then the cell should be 70x110.
3- Add a UIImageView to the cell with constraints; this will be your card image.
4- Create a UICollectionViewCell subclass with a method that sets the card's frame randomly inside the cell (by modifying the constraints).
Done!
Cells with no card will have no image, or an empty cell, as you wish. So to add a new card, just pick at random among the empty cells and add the card at its random coordinates inside that cell.
Your UICollectionViewCell would look like this:
class CardCollectionViewCell: UICollectionViewCell {
    @IBOutlet weak var card: UIImageView!

    override func awakeFromNib() {
        super.awakeFromNib()
        // Random origin such that the card still fits entirely inside the cell.
        let newX = CGFloat(arc4random_uniform(UInt32(bounds.size.width - card.bounds.size.width + 1)))
        let newY = CGFloat(arc4random_uniform(UInt32(bounds.size.height - card.bounds.size.height + 1)))
        card.translatesAutoresizingMaskIntoConstraints = false
        card.leftAnchor.constraint(equalTo: leftAnchor, constant: newX).isActive = true
        card.topAnchor.constraint(equalTo: topAnchor, constant: newY).isActive = true
    }
}
And your Collection View Controller should look like this:
Collection View Image
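Picking "at random among the empty cells" could then be sketched like this, with a hypothetical hasCard array as the controller's model:

var hasCard = [Bool](repeating: false, count: totalCells) // totalCells is assumed

func addCardToRandomEmptyCell(in collectionView: UICollectionView) {
    let emptySlots = hasCard.indices.filter { !hasCard[$0] }
    guard let slot = emptySlots.randomElement() else { return } // Board is full.
    hasCard[slot] = true
    // Reload so the data source shows the card image in that cell.
    collectionView.reloadItems(at: [IndexPath(item: slot, section: 0)])
}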
As I can see in the picture, all your cards are aligned at the bottom of the view. So if you generate a random y position between 0 and (the y origin of your cards' row minus one card height), you can simply get a CGPoint based on the frame of your view and the size of the cards.
If you want to randomly place cards along the screen, you could do something like this:
func random() -> CGFloat {
    return CGFloat(Float(arc4random()) / 0xFFFFFFFF)
}

func random(min: CGFloat, max: CGFloat) -> CGFloat {
    return random() * (max - min) + min
}
let actualX = random(min: *whatever amount*, max: *whatever amount*)
let actualY = random(min: *height of the first card*, max: *whatever amount*)
card.center = CGPoint(x: actualX, y: actualY)
The cards will then be positioned randomly above the existing cards.
I am not sure if you are planning to place all the cards in an orderly way. But if you do, you could do it like this.
Once the view is loaded, get all the possible card positions and store them in a map, with a number as the key. Then generate a random number between 0 and the total number of positions stored in the map, and every time you occupy a position, remove its entry from the map.
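A minimal sketch of that map idea, assuming allPossibleCenters was computed up front (for example with the grid code earlier in this thread):

var openPositions = Dictionary(uniqueKeysWithValues: allPossibleCenters.enumerated().map { ($0.offset, $0.element) })

func takeRandomPosition() -> CGPoint? {
    guard let key = openPositions.keys.randomElement() else { return nil } // All spots taken.
    return openPositions.removeValue(forKey: key)
}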
You can try with CAShapeLayer and UIBezierPath.
1. Create a CAShapeLayer for your main view, the one you will be adding subviews to; let's call it the main shape layer. This is helpful for checking that each estimated new view lies within the main view.
2. Create a UIBezierPath instance. Whenever a valid new subview is found, add its edges to this path.
3. Create a random point within the main view.
4. Create a CGRect with the random point as the center of your subview; let's call it the estimated view frame.
5. Check that the estimated view frame is completely visible in your main view; otherwise go back to step 3.
6. Check the 4 edges of your estimated view frame against the path object. If any one of the edges is inside the path, go back to step 3.
7. If none of the 4 edges is inside the path, the estimated view frame is the new view's frame.
8. Create the new subview and add it to your main view.
9. Add the edges of the new view to the path.
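A condensed sketch of these steps (cardSize is assumed, the four corners stand in for the edge check of step 6, and the main view's bounds replace the main shape layer of step 1):

let occupiedPath = UIBezierPath()

func findFreeFrame(in mainView: UIView, cardSize: CGSize, maxTries: Int = 50) -> CGRect? {
    for _ in 0..<maxTries {
        // Steps 3 and 4: random center, estimated frame around it.
        let center = CGPoint(x: CGFloat.random(in: 0...mainView.bounds.width),
                             y: CGFloat.random(in: 0...mainView.bounds.height))
        let frame = CGRect(x: center.x - cardSize.width / 2,
                           y: center.y - cardSize.height / 2,
                           width: cardSize.width,
                           height: cardSize.height)
        // Step 5: the estimated frame must be completely inside the main view.
        guard mainView.bounds.contains(frame) else { continue }
        // Step 6: none of the four corners may fall inside an occupied region.
        let corners = [CGPoint(x: frame.minX, y: frame.minY),
                       CGPoint(x: frame.maxX, y: frame.minY),
                       CGPoint(x: frame.minX, y: frame.maxY),
                       CGPoint(x: frame.maxX, y: frame.maxY)]
        if corners.contains(where: { occupiedPath.contains($0) }) { continue }
        // Step 9: mark the area as taken before returning it.
        occupiedPath.append(UIBezierPath(rect: frame))
        return frame
    }
    return nil // Gave up after maxTries, as in the linked project.
}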
I have created a sample project in swift with the above logic.
https://github.com/mcabasheer/find-free-space-in-uiview
You can change the width and height of the new view. I have added a condition to stop looking for the next free space after 50 tries; this avoids an infinite loop.
As @davecom highlighted, picking a purely random point for each new view wastes space and you will run out of room quickly. If you maintain the free available space instead, you can fit more subviews.
Related
So I am developing a game using SpriteKit that uses a pyramid of sprites (let's say circles, for a simple example). The user can choose the number of rows of sprites they would like to have in the game. The sprites form a pyramid, so if you have 1 row, you have 1 sprite node. The count increases by 2 the farther down you go (the more rows you choose), creating the pyramid shape. So if a user picked 3 rows, the game board would look like this:
O
O O O
O O O O O
However, when it gets to 5 rows, it loses its pyramid shape because the screen is only so wide and it has to fit all the elements onto the screen (elements are more smushed together in rows further down).
My question is, to fix this issue, what would I have to do to make the pyramid resize and change its spacing between elements depending on how many rows are chosen? Would I have to multiply the spacing by a certain factor? I have also heard of people adding layers onto the screen - maybe drawing the sprites in some sort of container so that it always resizes the pyramid to fit the screen without skewing the pyramid shape?
Your idea is correct! Make an SKNode container, then update its .size property, or use .setScale.
(not at xcode right now, pardon if not 100%)
// Say that our scene's size is 400x400:
let bkg = SKShapeNode(rectOf: self.size)
self.addChild(bkg) // The container has to be in the scene to be rendered.

bkg.addChild(firstSprite)
bkg.addChild(secondSprite) // And so on...

// Find the farthest point in bkg:
var farthestX = CGFloat(0)
for node in bkg.children {
    if node.position.x + node.frame.size.width / 2 > farthestX {
        farthestX = node.position.x + node.frame.size.width / 2
    }
}

// Do math to resize the bkg:
if self.size.width < farthestX {
    let scaler = self.size.width / farthestX
    bkg.setScale(scaler)
}
This should work, or at least the general idea should work... You would want to check for Y values and Height as well.
You can easily compute a symmetrical size proportional to the number of rows and resize your sprites accordingly. This is my idea in pseudocode:
let computedSize = deviceWidth / (2 * (rows - 1) + 1)
for sprite in sprites {
    sprite.size.width = computedSize
    sprite.size.height = computedSize
}
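If it helps, here is one way that pseudocode might translate into an actual layout pass, assuming square sprites and a scene whose origin is at the center (all names are illustrative):

func layoutPyramid(rows: Int, in scene: SKScene) {
    // The widest (bottom) row has 2 * (rows - 1) + 1 sprites and dictates the size.
    let side = scene.size.width / CGFloat(2 * (rows - 1) + 1)
    for row in 0..<rows {
        let spritesInRow = 2 * row + 1
        // Stack rows from top to bottom around the vertical center.
        let y = (CGFloat(rows - 1) / 2 - CGFloat(row)) * side
        for i in 0..<spritesInRow {
            let sprite = SKSpriteNode(color: .white, size: CGSize(width: side, height: side))
            // Offset each sprite from the row's horizontal midpoint.
            sprite.position = CGPoint(x: (CGFloat(i) - CGFloat(spritesInRow - 1) / 2) * side, y: y)
            scene.addChild(sprite)
        }
    }
}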
I'm building an app that draws the waveform of input audio data.
Here is a visual representation of how it looks:
It behaves similarly to Apple's native Voice Memos app, but lacks its performance. The waveform itself is a UIScrollView subclass in which I draw CALayer instances to represent the purple "bars" and add them as sublayers. At the beginning the waveform is empty; when sound input starts, I update the waveform with this function:
class ScrollingWaveformPlot: UIScrollView {
    var offset: CGFloat = 0
    var normalColor: UIColor?
    var waveforms: [CALayer] = []
    var lastBarRect: CGRect?
    var kBarWidth: Int = 5

    func updateGraph(with value: Float) {
        // Create the rect for the new bar
        self.lastBarRect = CGRect(x: self.offset,
                                  y: self.frame.height / 2,
                                  width: CGFloat(self.kBarWidth),
                                  height: -(CGFloat)(value * 2))
        let barLayer = CALayer()
        barLayer.bounds = self.lastBarRect!
        barLayer.position = CGPoint(x: self.offset + CGFloat(self.kBarWidth) / 2,
                                    y: self.frame.height / 2)
        barLayer.backgroundColor = self.normalColor?.cgColor
        self.layer.addSublayer(barLayer)
        self.waveforms.append(barLayer)
        self.offset += 7
    }
    ...
}
When the last bar's rect reaches the middle of the screen, I begin increasing the waveform's contentOffset.x to keep it scrolling like Apple's Voice Memos app.
The problem is: when the bar count reaches ~500...1000, noticeable lag begins to happen in the waveform during setContentOffset:
self.inputAudioPlot.setContentOffset(CGPoint(x: CGFloat(self.offset) - CGFloat(self.view.frame.width / 2 - 7),y: 0), animated: false)
What can be optimized here? Any help appreciated.
Simply remove the bars that get scrolled off-screen from their superlayer. If you want to get fancy you could put them in a queue to reuse when adding new samples, but this might not be worth the effort.
If you don’t let the user scroll inside that view it might even be worth it to not use a scroll view at all and instead move the visible bars to the left when you add a new one.
Of course if you do need to let the user scroll you have some more work to do. Then you first have to store all values you are displaying somewhere so you can get them back. With this you can override layoutSubviews to add all missing bars during scrolling.
This is basically how UITableView and UICollectionView works, so you might be able to implement this using a collection view and a custom layout. Doing it manually though might be easier and also perform a little better as well.
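A rough sketch of the culling idea inside the question's ScrollingWaveformPlot (a minimal version; assuming no zooming, so the visible rect is just contentOffset plus bounds):

override func layoutSubviews() {
    super.layoutSubviews()
    let visibleRect = CGRect(origin: contentOffset, size: bounds.size)
    for bar in waveforms {
        let onScreen = visibleRect.intersects(bar.frame)
        if onScreen && bar.superlayer == nil {
            layer.addSublayer(bar)      // Scrolled back into view: reattach.
        } else if !onScreen && bar.superlayer != nil {
            bar.removeFromSuperlayer()  // Off-screen: stop compositing it.
        }
    }
}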
I suspect that this will probably be a larger refactoring than you can afford, but SpriteKit would probably give you better performance and control over rendering the waveform.
You can use SKSpriteNodes (which are much faster than shape nodes) for the wave bars. You can implement scrolling by merely moving a parent node that contains all these sprites.
SpriteKit is quite good at skipping non-visible nodes when rendering, and you can even dynamically remove/add nodes from the scene if the number of nodes becomes a problem.
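A minimal sketch of that structure (names are assumptions; it assumes the scene's origin is at the lower left and that barsNode was added to the scene once):

let barsNode = SKNode() // Parent for all bars; add to the scene once.

func appendBar(value: CGFloat, index: Int, in scene: SKScene) {
    let bar = SKSpriteNode(color: .purple, size: CGSize(width: 5, height: max(value * 2, 1)))
    bar.position = CGPoint(x: CGFloat(index) * 7, y: scene.size.height / 2)
    barsNode.addChild(bar)
    // Scroll by shifting the parent only, once the newest bar passes mid-screen.
    barsNode.position.x = min(0, scene.size.width / 2 - CGFloat(index) * 7)
}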
I know scene coordinates are (0,0) at the center, so how can I distribute 4 SKNodes evenly along the width using a for loop?
for index in 1...4 {
    let node = SKNode()
    node.position = // evenly distributed along the scene width. How can I do it?
}
You could use the width of the scene divided by the number of nodes minus 1. This would then be multiplied by the index minus 1, then added to the minimum x value of your scene.
let widthRange = scene.frame.maxX - scene.frame.minX
for index in 1...4 {
    let node = SKNode()
    node.position.x = scene.frame.minX + widthRange / (4 - 1) * CGFloat(index - 1)
}
This should place each node an equal distance away from each other along the width of the scene, starting from each edge.
If your index for the for loop doesn't start at 1, the code would need to be modified to something like this:
let widthRange = scene.frame.maxX - scene.frame.minX
var counter = 0
for index in 352...355 {
    let node = SKNode()
    node.position.x = scene.frame.minX + widthRange / (4 - 1) * CGFloat(counter)
    counter += 1
}
If you're unsure of which version to use, use the second version, as it's less reliant on the format of the for loop (as long as you reset counter to 0 before every time you use this).
If you need any clarifications as to what does what, feel free to ask.
Edit
This should clarify what the code actually does.
let widthRange = scene.frame.maxX - scene.frame.minX is used to get the width of the scene and store it in a variable that is used later. Making this a variable is somewhat optional, since you could just use the scene size directly, but it keeps things less messy.
var counter = 0 simply makes a counter variable that's used in the for loop to place the nodes. Make sure to set it to 0 before every time you run the loop. counter += 1 is then used later so each node is offset by an increasing value, which creates the different x-positions (more on that below).
node.position.x = scene.frame.minX + widthRange / (4 - 1) * CGFloat(counter) is the complicated line. First, scene.frame.minX starts placement from the lowest x-position. widthRange / (4 - 1) is a little harder to understand: it takes the scene's width (from earlier) and divides it by one less than the number of nodes. The best way to see this is that to cut a rectangle (in a single direction) so that 2 cuts go through it (2 of the nodes) and 2 edges remain (the other 2 nodes), you need 3 sections. This gives the distance between each node; the pattern numberOfNodes - 1 holds for any number of nodes. The * CGFloat(counter) part is what actually changes each node's x-position: since counter increases by 1 each time through the loop (starting at 0), each node sits one more section width (widthRange / (4 - 1)) along.
In my application I have a UIView. I want the user to be able to drag the view from its original position to a particular limited position. For this I have used UIPanGestureRecognizer, and in the gestureRecognizer.state == .changed condition I change the coordinates of the view. I am able to drag the view to the limited position when moving slowly, but if the user drags the view very rapidly upward or downward the screen, the view can be pulled beyond the limits I put on the Y position:
if upperLimit > (self.topbaseConstrant.constant * -1) {
    self.topbaseConstrant.constant += gestureRecognizer.translation(in: self.view!).y
    gestureRecognizer.setTranslation(.zero, in: self.view!)
}
I have been trying to solve this issue for the last three days. Please give me a suggestion.
Thanks in advance.
Use the min function to determine the upper limit:
let newPosition = topbaseConstrant.constant + panGestureRecognizer.translation(in: nil).y
topbaseConstrant.constant = min(upperLimit, newPosition)
If you drag quickly and blow past your constraint, the min function will always return that upper constraint as your new position.
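The same idea extends to clamping at both ends, assuming a hypothetical lowerLimit alongside the existing upperLimit:

let newPosition = topbaseConstrant.constant + panGestureRecognizer.translation(in: nil).y
// min caps the top, max caps the bottom; lowerLimit is an assumed constant.
topbaseConstrant.constant = max(lowerLimit, min(upperLimit, newPosition))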
I wanted to add the ability to drag elements inside the carousel, and I have some trouble with the positioning and transformation of the draggable element.
As I understand it is better to temporarily hide/delete the draggable element and create its proxy outside the carousel item array which represents the position and transform properties of the real one.
The problem is how to set the correct transform on such a proxy item according to its position. A view from the items array has its own index plus the carousel scroll offset (which is common to all the items), but the separated view only has a position (at best it stores the transform of the original carousel item, and I can find the nearest view/views).
So how do I convert a given screen point to an offset in the iCarousel coordinate system? The transform matrices are complex enough: rotation, perspective, and translation.
I have found a solution, but in my particular case it looks awful. I need to move along y only, so on each drag position change event:
CGFloat newY = currentTouchPosition.y - positionOnTouchStartDragging.y + (draggedPageIndex - carousel.scrollOffset) * spacing * carousel.bounds.size.height;
CGFloat deltaY = self.bounds.size.height * spacing;
CATransform3D result = [carousel transformForItemViewWithOffset:newY / deltaY];
where spacing is a variable given in carousel or via its delegate.