I have the following demo app, in which I run an ARWorldTrackingConfiguration on my RealityKit ARView.
I also use plane detection.
When a plane is detected, I add the ability to "Fire" a rectangle onto the plane with a simple square collision box.
After about 100 squares, the app's thermalStatus changes to serious and my frame rate goes down to 30 fps.
For the life of me, I can't understand why 100 simple shapes in a RealityKit world, with no special textures or even collision events, would cause this.
Does anyone have any idea?
PS1: I'm running this on an iPhone XS, which should be able to perform better according to its hardware specifications.
PS2: The code is below.
import UIKit
import RealityKit
import ARKit
let material = SimpleMaterial(color: .systemPink, isMetallic: false)
var sphere: MeshResource = MeshResource.generatePlane(width: 0.1, depth: 0.1)
var box = ShapeResource.generateBox(width: 0.1, height: 0.03, depth: 0.1)
var ballEntity = ModelEntity(mesh: sphere, materials: [material])
let collider = CollisionComponent(
    shapes: [box],
    mode: .trigger
)

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    @IBOutlet weak var button: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.vertical]
        configuration.worldAlignment = .camera
        // Add the box anchor to the scene
        configuration.frameSemantics.remove(.bodyDetection)
        configuration.frameSemantics.remove(.personSegmentation)
        configuration.frameSemantics.remove(.personSegmentationWithDepth)
        arView.renderOptions.insert(.disableCameraGrain)
        arView.renderOptions.insert(.disableGroundingShadows)
        arView.renderOptions.insert(.disableHDR)
        arView.renderOptions.insert(.disableMotionBlur)
        arView.renderOptions.insert(.disableFaceOcclusions)
        arView.renderOptions.insert(.disableDepthOfField)
        arView.renderOptions.insert(.disablePersonOcclusion)
        configuration.planeDetection = [.vertical, .horizontal]
        arView.debugOptions = [.showAnchorGeometry, .showStatistics]
        let gesture = UITapGestureRecognizer(target: self,
                                             action: #selector(self.tap(_:)))
        arView.addGestureRecognizer(gesture)
        arView.session.run(configuration, options: [.resetSceneReconstruction])
    }

    @objc func tap(_ sender: UITapGestureRecognizer) {
        let point: CGPoint = sender.location(in: arView)
        guard let query = arView.makeRaycastQuery(from: point,
                                                  allowing: .existingPlaneGeometry,
                                                  alignment: .vertical) else {
            return
        }
        let result = arView.session.raycast(query)
        guard let raycastResult = result.first else { return }
        let anchor = AnchorEntity(raycastResult: raycastResult)
        var ballEntity = ModelEntity(mesh: sphere, materials: [material])
        ballEntity.collision = collider
        anchor.addChild(ballEntity)
        arView.scene.anchors.append(anchor)
    }

    @IBAction func removePlaneDebugging(_ sender: Any) {
        if arView.debugOptions.contains(.showAnchorGeometry) {
            arView.debugOptions.remove(.showAnchorGeometry)
            button.setTitle("Display planes", for: .normal)
            return
        }
        button.setTitle("Remove planes", for: .normal)
        arView.debugOptions.insert(.showAnchorGeometry)
    }
}
Can anyone please assist?
When you use ARKit or RealityKit, your iPhone's thermal state doesn't depend only on the realtime rendering of 100 primitives. The main load is 6-DOF world tracking running at 60 fps. Plane detection is another compute-heavy feature for your device. So world tracking and plane detection are extremely CPU/GPU intensive (as are image/object detection and people occlusion). They also drain your battery quickly.
Another heavy burden for your CPU/GPU is shadows and reflective metallic shaders. Realtime 60 fps soft shadows are an additional task for any device (even one with an A12 processor), and metallic textures that use environment reflections are calculated on the Neural Engine.
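If the squares are meant to stay where they are, one option (not from the original post, just a sketch) is to switch plane detection off once you have placed your content, so the tracking pipeline has less work to do. This assumes the ViewController is set as the session's delegate (arView.session.delegate = self):

```swift
// A minimal sketch: once the first plane has been detected, re-run the session
// without plane detection to shed some of the CPU/GPU load.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    guard anchors.contains(where: { $0 is ARPlaneAnchor }) else { return }

    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .camera   // match the question's original setup
    configuration.planeDetection = []        // stop searching for new planes
    session.run(configuration)               // re-run without reset options so existing anchors are kept
}
```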
I have an app that I am trying to update from SceneKit to RealityKit, and one of the features that I am having a hard time replicating in RealityKit is making an entity constantly look at the camera. In SceneKit, this was accomplished by adding the following billboard constraints to the node:
let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = [.X, .Y]
startLabelNode.constraints = [billboardConstraint]
This allowed the startLabelNode to rotate freely so that it was constantly facing the camera, without the startLabelNode changing its position.
However, I can't seem to figure out a way to do this with RealityKit. I have tried the look(at:) method, which doesn't seem to offer the ability to constantly face the camera. Here is a short sample app where I have tried to implement a version of this in RealityKit, but it doesn't keep the entity constantly facing the camera like it did in SceneKit:
import UIKit
import RealityKit
import ARKit
class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet weak var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self
        arView.environment.sceneUnderstanding.options = []
        arView.debugOptions.insert(.showSceneUnderstanding) // Display a debug visualization of the mesh.
        arView.renderOptions = [.disablePersonOcclusion, .disableDepthOfField, .disableMotionBlur] // For performance, disable render options that are not required for this app.
        arView.automaticallyConfigureSession = false

        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        } else {
            print("Mesh Classification not available on this device")
            configuration.worldAlignment = .gravity
            configuration.planeDetection = [.horizontal, .vertical]
        }
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)

        UIApplication.shared.isIdleTimerDisabled = true // Prevent the screen from being dimmed to avoid interrupting the AR experience.
    }

    @IBAction func buttonPressed(_ sender: Any) {
        let screenWidth = arView.bounds.width
        let screenHeight = arView.bounds.height
        let centerOfScreen = CGPoint(x: (screenWidth / 2), y: (screenHeight / 2))
        if let raycastResult = arView.raycast(from: centerOfScreen, allowing: .estimatedPlane, alignment: .any).first {
            addStartLabel(at: raycastResult.worldTransform)
        }
    }

    func addStartLabel(at result: simd_float4x4) {
        let resultAnchor = AnchorEntity(world: result)
        resultAnchor.addChild(clickToStartLabel())
        arView.scene.addAnchor(resultAnchor)
    }

    func clickToStartLabel() -> ModelEntity {
        let text = "Click to Start Here"
        let textMesh = MeshResource.generateText(text, extrusionDepth: 0.001, font: UIFont.boldSystemFont(ofSize: 0.01))
        let textMaterial = UnlitMaterial(color: .black)
        let textModelEntity = ModelEntity(mesh: textMesh, materials: [textMaterial])
        textModelEntity.generateCollisionShapes(recursive: true)
        textModelEntity.position.x -= textMesh.width / 2
        textModelEntity.position.y -= textMesh.height / 2

        let planeMesh = MeshResource.generatePlane(width: (textMesh.width + 0.01), height: (textMesh.height + 0.01))
        let planeMaterial = UnlitMaterial(color: .white)
        let planeModelEntity = ModelEntity(mesh: planeMesh, materials: [planeMaterial])
        planeModelEntity.generateCollisionShapes(recursive: true)
        // move the plane up to make it sit on the anchor instead of in the middle of the anchor
        planeModelEntity.position.y += planeMesh.height / 2
        planeModelEntity.addChild(textModelEntity)
        // This does not always keep the planeModelEntity facing the camera
        planeModelEntity.look(at: arView.cameraTransform.translation, from: planeModelEntity.position, relativeTo: nil)
        return planeModelEntity
    }
}

extension MeshResource {
    var width: Float {
        return (bounds.max.x - bounds.min.x)
    }
    var height: Float {
        return (bounds.max.y - bounds.min.y)
    }
}
Is the look(at:) function the best way to get this missing feature working in RealityKit, or is there a better way to have an Entity constantly face the camera?
I haven't messed with RealityKit much, but assuming the entity is a SceneKit node, you can set constraints on it and it will be forced to face 'targetNode' at all times. Provided that works the way you want it to, you may then have to experiment with how the node is initially created, i.e. what direction it is facing.
func setTarget() {
    node.constraints = []
    let vConstraint = SCNLookAtConstraint(target: targetNode)
    vConstraint.isGimbalLockEnabled = true
    node.constraints = [vConstraint]
}
I was able to figure out an answer to my question. Adding the following block of code allowed the entity to constantly look at the camera:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    planeModelEntity.look(at: arView.cameraTransform.translation,
                          from: planeModelEntity.position(relativeTo: nil),
                          relativeTo: nil)
}
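For completeness, here is a sketch of how that delegate method could tie into the rest of the question's code. It assumes the ViewController stores the label entity (for example in a planeModelEntity property set inside clickToStartLabel()) so the per-frame update has something to re-orient:

```swift
class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet weak var arView: ARView!

    // Assumed property: set this when the label is created in clickToStartLabel().
    var planeModelEntity: ModelEntity?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let label = planeModelEntity else { return }
        // Rotate the label toward the camera every frame without moving it.
        label.look(at: arView.cameraTransform.translation,
                   from: label.position(relativeTo: nil),
                   relativeTo: nil)
    }
}
```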
I want to place a 3D Model from the local device on top of the reference image when it's recognized. To achieve this I have tried the following:
Adding the reference image to the session configuration:
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    // Check if the device supports the AR experience
    if (!ARConfiguration.isSupported) {
        TLogger.shared.error_objc("Device does not support Augmented Reality")
        return
    }

    guard let qrCodeReferenceImage = UIImage(named: "QRCode") else { return }
    let detectionImages: Set<ARReferenceImage> = convertToReferenceImages([qrCodeReferenceImage])

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = detectionImages
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
Using the ARSessionDelegate to get notified when the reference image is detected, and placing the 3D model at the same position as its ARImageAnchor:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let position = imageAnchor.transform
        addEntity(self.localModelPath!, position)
    }
}

func addEntity(_ modelPath: URL, _ position: float4x4) {
    // Load 3D Object as Entity
    let entity = try! Entity.load(contentsOf: modelPath)

    // Create the Anchor which gets added to the AR Scene
    let anchor = AnchorEntity(world: position)
    anchor.addChild(entity)
    anchor.transform.matrix = position
    arView.scene.addAnchor(anchor)
}
However, whenever I try to place the anchor, including my 3D model (the entity), at a specific position, it doesn't appear in the arView. The model does seem to be loaded, though, since a few frames are dropped while the addEntity function executes. When I don't specifically set the anchor's position, the model appears in front of the camera.
Can anyone lead me in the right direction here?
Solution I
To make your code work properly, remove this line:
anchor.transform.matrix = position
@TimLangner – "...But I want to make the model appear on top of the reference image and not at any other location..."
As you can see, my test sphere has appeared on top of the reference image. When changing its position, remember that the Y-axis of the image anchor points towards the camera if the QR code is vertical, and points up if the QR code lies on a horizontal surface.
Make sure the pivot point is located where it should be.
In my case (you can see that I'm using an App Clip Code), to move the sphere 30 cm up, I have to move it along the negative Z direction.
entity.position.z = anchor.position.z - 0.3
Solution II
In my opinion, the simplest and most productive solution is to use RealityKit's native AnchorEntity(.image(...)), without the need to handle ARImageAnchor in a delegate method.
AnchorEntity(.image(group: "GroupName", name: "forModel"))
Here's the code:
import UIKit
import RealityKit
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let entity = ModelEntity(mesh: .generateSphere(radius: 0.1))
        let anchor = AnchorEntity(.image(group: "AR Resources",
                                         name: "appClipCode"))
        anchor.addChild(entity)
        entity.position.z = anchor.position.z - 0.3
        arView.scene.anchors.append(anchor)
    }
}
I am creating an iOS ARKit app where I wanted to place a large object in Augmented Reality.
When I place the object at a particular position, it appears to move as the camera position changes, and I am not able to view the object from all angles by moving the camera.
But if I reduce its scale value to 0.001 (reducing the size of the object), I am able to view it from all angles, and the position of the placed object also does not change to the same extent.
Bounding Box of the Object:-
Width = 3.66
Height = 1.83
Depth = 2.438
Model/Object Url:-
https://drive.google.com/open?id=1uDDlrTIu5iSRJ0cgp70WFo7Dz0hCUz9D
Source Code:-
import UIKit
import ARKit
import SceneKit
class ViewController: UIViewController {

    @IBOutlet weak var sceneView: ARSCNView!

    private let configuration = ARWorldTrackingConfiguration()
    private var node: SCNNode!

    //MARK: - Life cycle

    override func viewDidLoad() {
        super.viewDidLoad()
        self.sceneView.showsStatistics = false
        self.sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
        self.sceneView.automaticallyUpdatesLighting = false
        self.sceneView.delegate = self
        self.addTapGesture()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        configuration.planeDetection = .horizontal
        self.sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.sceneView.session.pause()
    }

    //MARK: - Methods

    func addObject(hitTestResult: ARHitTestResult) {
        let scene = SCNScene(named: "art.scnassets/Cube.obj")!
        let modelNode = scene.rootNode.childNodes.first
        modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.3.x,
                                         hitTestResult.worldTransform.columns.3.y,
                                         hitTestResult.worldTransform.columns.3.z)
        let scale = 1
        modelNode?.scale = SCNVector3(scale, scale, scale)
        self.node = modelNode
        self.sceneView.scene.rootNode.addChildNode(modelNode!)

        let lightNode = SCNNode()
        lightNode.light = SCNLight()
        lightNode.light?.type = .omni
        lightNode.position = SCNVector3(x: 0, y: 10, z: 20)
        self.sceneView.scene.rootNode.addChildNode(lightNode)

        let ambientLightNode = SCNNode()
        ambientLightNode.light = SCNLight()
        ambientLightNode.light?.type = .ambient
        ambientLightNode.light?.color = UIColor.darkGray
        self.sceneView.scene.rootNode.addChildNode(ambientLightNode)
    }

    private func addTapGesture() {
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
        self.sceneView.addGestureRecognizer(tapGesture)
    }

    @objc func didTap(_ gesture: UIPanGestureRecognizer) {
        let tapLocation = gesture.location(in: self.sceneView)
        let results = self.sceneView.hitTest(tapLocation, types: .featurePoint)
        guard let result = results.first else {
            return
        }
        let translation = result.worldTransform.translation
        guard let node = self.node else {
            self.addObject(hitTestResult: result)
            return
        }
        node.position = SCNVector3Make(translation.x, translation.y, translation.z)
        self.sceneView.scene.rootNode.addChildNode(self.node)
    }
}

extension float4x4 {
    var translation: SIMD3<Float> {
        let translation = self.columns.3
        return SIMD3<Float>(translation.x, translation.y, translation.z)
    }
}
GIF of the Problem:-
Video URL of the Problem:-
https://drive.google.com/open?id=1E4euZ0ArEtj2Ffto1pAOfVZocV08EYKN
Approaches Tried:-
Tried to place the object at the origin
modelNode?.position = SCNVector3(0, 0, 0)
Tried to place the object at some distance away from the device camera
modelNode?.position = SCNVector3(0, 0, -800)
Tried different combinations of the worldTransform/localTransform columns
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.3.x, hitTestResult.worldTransform.columns.3.y, hitTestResult.worldTransform.columns.3.z)
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.2.x, hitTestResult.worldTransform.columns.2.y, hitTestResult.worldTransform.columns.2.z)
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.1.x, hitTestResult.worldTransform.columns.1.y, hitTestResult.worldTransform.columns.1.z)
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.1.x, hitTestResult.worldTransform.columns.2.y, hitTestResult.worldTransform.columns.3.z)
modelNode?.position = SCNVector3(hitTestResult.localTransform.columns.3.x, hitTestResult.localTransform.columns.3.y, hitTestResult.localTransform.columns.3.z)
But still no luck. The object still appears to move with the device camera rather than staying at the position where it was placed.
Expected Result:-
The object should be its actual size (a scale of 1.0). There should be no reduction in the scale value.
Once placed at a particular position, it should not move with the movement of the device camera.
The object can be seen from all angles as the device camera moves, without any change in the object's position.
Unlike what is stated in the accepted answer, the issue is probably not about the tracking quality or a bug in the model. It looks like the model is not correctly placed on top of the ground, probably due to a mispositioned pivot point, so part of the model stays under the ground. When you move the camera, since the part under the ground is not occluded by the floor, the model looks like it is shifting.
The pivot points of the models provided by Apple are positioned correctly so that when it is placed on top of a plane on the ground, its parts stay above ground.
If you correctly position the pivot point of the model, it should work correctly, independent of the model type.
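For example, in SceneKit (matching the question's code) the pivot can be shifted so that the bottom of the model's bounding box sits at the anchor point. This is only a sketch; `modelNode` stands in for the loaded node:

```swift
// Move the pivot to the bottom-center of the bounding box so the model
// rests on the detected plane instead of straddling it.
let (minBounds, maxBounds) = modelNode.boundingBox
modelNode.pivot = SCNMatrix4MakeTranslation(
    (minBounds.x + maxBounds.x) / 2,  // center on X
    minBounds.y,                      // bottom on Y
    (minBounds.z + maxBounds.z) / 2   // center on Z
)
```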
I found the root cause of the issue. It was related to the model I was using for AR. When I replaced the model with one provided at this link: https://developer.apple.com/augmented-reality/quick-look/, I did not face any issues. So if anyone faces this type of issue in the future, I would recommend using one of the models provided by Apple to check whether the issue persists with it.
I experienced the same issue.
Whenever I tried to change anything in our .usdz model (which is actually an encrypted and compressed format), I could not edit or change anything in it. If I edited it or changed its position even a little, it behaved the same way as highlighted in the question.
To handle this issue, I just moved the old .usdz model to the trash and copied the original file (again) into Xcode, and then it worked.
ARKit and ARCore both have a feature to estimate the ambient light intensity and color for realistic rendering.
ARKit: https://developer.apple.com/documentation/arkit/arlightestimate?language=objc
ARCore: https://developers.google.com/ar/reference/java/arcore/reference/com/google/ar/core/LightEstimate#getColorCorrection(float[],%20int)
They both expose an ambient intensity and an ambient color. In ARKit, the color is a color temperature in kelvin, while in ARCore it is an RGB color correction.
Question 1: What's the difference between kelvin and color correction and how can they be applied to rendering?
Question 2: What's the algorithm to estimate the light intensity and color from camera frames? Are there existing code or research papers we can refer to if we want to implement it ourselves?
Question 2 for ARCore:
Here is a research paper on How Environmental HDR works
Here is a short summary about environmental HDR in ARCore + Sceneform
Hope it helps you in your search :)
Provided you have added a light node called 'light' with an SCNLight attached to it in your "ship.scn" SCNScene, and your ViewController conforms to ARSessionDelegate so you can get the light estimate per frame:
class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.isLightEstimationEnabled = true
        sceneView.session.run(configuration)
        sceneView.session.delegate = self
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let lightEstimate = frame.lightEstimate,
              let light = sceneView.scene.rootNode.childNode(withName: "light", recursively: false)?.light else { return }
        light.temperature = lightEstimate.ambientColorTemperature
        light.intensity = lightEstimate.ambientIntensity
    }
}
As a result, if you dim the lights in your room, SceneKit will dim the virtual light too.
I am building a ship game using Swift. The objective is to avoid the incoming stones and score as many points as you can as the level increases. The stones come from the opposite direction to hit the ship. But I am unable to detect collisions between the ship and a stone: the stone passes through the ship. The ship can move to the left or to the right.
I used rect1.intersects(rect2) for the intersection test.
Thank you.
Here is ViewController.swift:
import UIKit
class ViewController: UIViewController {
    @IBOutlet weak var moveWater: MovingWater!

    var boat: UIImageView!
    var stone: UIImageView!
    var boatLeftRight: UILongPressGestureRecognizer!
    var tapTimer: Timer!

    var leftM: UInt32 = 55
    var rightM: UInt32 = 250
    var leftS: UInt32 = 35
    var rightS: UInt32 = 220

    func startGame() {
        boat = UIImageView(image: UIImage(named: "boat"))
        boat.frame = CGRect(x: 0, y: 0, width: 60, height: 90)
        boat.frame.origin.y = self.view.bounds.height - boat.frame.size.height - 10
        boat.center.x = self.view.bounds.midX
        self.view.insertSubview(boat, aboveSubview: moveWater)

        boatLeftRight = UILongPressGestureRecognizer(target: self, action: #selector(ViewController.leftRight(tap:)))
        boatLeftRight.minimumPressDuration = 0.001
        moveWater.addGestureRecognizer(boatLeftRight)
    }

    func leftRight(tap: UILongPressGestureRecognizer) {
        if tap.state == UIGestureRecognizerState.ended {
            if (tapTimer != nil) {
                self.tapTimer.invalidate()
            }
        } else if tap.state == UIGestureRecognizerState.began {
            let touch = tap.location(in: moveWater)
            if touch.x > moveWater.frame.midX {
                tapTimer = Timer.scheduledTimer(timeInterval: TimeInterval(0.005), target: self, selector: #selector(ViewController.moveBoat(time:)), userInfo: "right", repeats: true)
            } else {
                tapTimer = Timer.scheduledTimer(timeInterval: TimeInterval(0.005), target: self, selector: #selector(ViewController.moveBoat(time:)), userInfo: "left", repeats: true)
            }
        }
    }

    func moveBoat(time: Timer) {
        if let d = time.userInfo as? String! {
            var bot2 = boat.frame
            if d == "right" {
                if bot2.origin.x < CGFloat(rightM) {
                    bot2.origin.x += 2
                }
            } else {
                if bot2.origin.x > CGFloat(leftM) {
                    bot2.origin.x -= 2
                }
            }
            boat.frame = bot2
        }
    }

    func movingStone() {
        stone = UIImageView(image: UIImage(named: "stones.png"))
        var stone2 = leftS + arc4random() % rightS
        stone.bounds = CGRect(x: 10, y: 10, width: 81.0, height: 124.0)
        stone.contentMode = .center
        stone.layer.position = CGPoint(x: Int(stone2), y: 10)
        stone.transform = CGAffineTransform(rotationAngle: 3.142)
        self.view.insertSubview(stone, aboveSubview: moveWater)

        UIView.animate(withDuration: 5, delay: 0, options: UIViewAnimationOptions.curveLinear, animations: { () -> Void in
            self.stone.frame.origin.y = self.view.bounds.height + self.stone.frame.height + 10
        }) { (success: Bool) -> Void in
            self.stone.removeFromSuperview()
            self.movingStone()
        }
    }

    func update() {
        if (boat.bounds.intersects(stone.bounds)) {
            boat.image = //set new image
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        moveWater.backgroundStart()
        startGame()
        movingStone()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
I saw your own answer. This is basic collision detection indeed. You should still have a look at SpriteKit. I see that you are using a Timer, which is not the best way to go, since Timers will give you performance issues in the long run. It's a common mistake when you start game development.
Maybe you think that you can ensure a frame rate by setting a very fast timer. The thing is that timers are not consistent and they have low priority too. This means that your timer call will be delayed if something more important happens in the background. If your movement is based on that forced refresh rate, it will become choppy very quickly. Also, the code you run in the timer might get faster or slower depending on the logic you are using.
SpriteKit provides you with an update function that runs every frame and tells you the current system time at each frame. By keeping track of that value, you can calculate how much time passed between two frames and scale your movement accordingly to compensate for the difference.
On top of that, SpriteKit offers you a bunch of options for collision detection and movement. It integrates a very well-made physics engine and collision detection system. It will also do collision detection on complex shapes, apply forces to the bodies, etc.
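A rough sketch of what that could look like (the class, category names and sprite names here are placeholders, not code from the question):

```swift
import SpriteKit

class GameScene: SKScene, SKPhysicsContactDelegate {

    let boatCategory: UInt32 = 0x1 << 0
    let stoneCategory: UInt32 = 0x1 << 1
    private var lastUpdateTime: TimeInterval = 0

    override func didMove(to view: SKView) {
        // Each sprite needs an SKPhysicsBody with its categoryBitMask and
        // contactTestBitMask set so that contacts get reported.
        physicsWorld.contactDelegate = self
    }

    override func update(_ currentTime: TimeInterval) {
        // Scale movement by the time elapsed since the last frame instead of
        // relying on a Timer's refresh rate.
        let delta = lastUpdateTime == 0 ? 0 : currentTime - lastUpdateTime
        lastUpdateTime = currentTime
        _ = delta // move the boat/stones by speed * delta here
    }

    func didBegin(_ contact: SKPhysicsContact) {
        // Called by SpriteKit when two physics bodies start touching.
        let mask = contact.bodyA.categoryBitMask | contact.bodyB.categoryBitMask
        if mask == (boatCategory | stoneCategory) {
            // the boat hit a stone: end the game, swap the image, etc.
        }
    }
}
```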
I strongly suggest you follow the link to Ray Wenderlich's website given in the other answer. If you have the budget, you might also want to buy their book on how to make 2D games for Apple devices. I've read it cover to cover and I can say I love it. I can now do my own stuff with SpriteKit, and it's also a very good starter for newcomers to Swift.
Any reason you chose UIKit to make your game?
If you are making a game, you should really be using SpriteKit instead of UIKit.
Check Google and YouTube for SpriteKit tutorials; there are loads.
A really good start is this one that teaches you the basics of the SpriteKit Scene editor and how to do collisions etc.
https://www.raywenderlich.com/118225/introduction-sprite-kit-scene-editor
I recommend that you do not continue like this.
Hope this helps
I fixed this myself. It was easy, and I detected collisions without any SpriteKit.
func intersectsAt(tap2: Timer) {
    var f1: CGRect!
    var f2: CGRect!
    f1 = boat.layer.presentation()?.frame
    f2 = stone.layer.presentation()?.frame
    if (f1.intersects(f2)) {
        stopGame()
    }
}
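A possible refinement (not part of the original answer): the same check can be driven by a CADisplayLink instead of a Timer, so it runs once per screen refresh. `stopGame()` is the same method referenced above:

```swift
var displayLink: CADisplayLink?

func startCollisionChecks() {
    displayLink = CADisplayLink(target: self, selector: #selector(checkCollision))
    displayLink?.add(to: .main, forMode: .common)
}

@objc func checkCollision() {
    // Compare the presentation-layer frames, i.e. where the views are
    // on screen right now, mid-animation.
    guard let boatFrame = boat.layer.presentation()?.frame,
          let stoneFrame = stone.layer.presentation()?.frame else { return }
    if boatFrame.intersects(stoneFrame) {
        stopGame()
    }
}
```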