I'm making a game and built a custom NSObject sprite class for it. What I want to do is add a visual representation of my object when my view loads. I don't think it should be that hard, but this is my first time trying to get a custom class to render on screen. I also want to do this without using SpriteKit. Here is a stripped-down version of what my sprite class looks like:
class sprite: NSObject {
    var img: UIImage = UIImage(named: "koalio_stand")!
    var width = 10
    var height = 10
    var x, y: Int!
    var velocity: Int!
}
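Without SpriteKit, one minimal way to get a visual representation on screen is to back each sprite with a UIImageView added when the view loads. This is only a sketch under that assumption; the GameViewController and addSprite names are illustrative, not from the original post:
import UIKit

class GameViewController: UIViewController {
    let player = sprite()

    override func viewDidLoad() {
        super.viewDidLoad()
        player.x = 50
        player.y = 100
        addSprite(player)
    }

    // Create a view for the sprite's image and place it at the sprite's position.
    func addSprite(_ s: sprite) {
        let imageView = UIImageView(image: s.img)
        imageView.frame = CGRect(x: s.x ?? 0, y: s.y ?? 0,
                                 width: s.width, height: s.height)
        view.addSubview(imageView)
    }
}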
I'm making a simple game application for iOS devices, loosely based on the book 'Swift Game Development'. I have created a protocol which I use as a template for creating a class for each type of in-game object. A platform for the player to jump on has the following class based on the protocol:
import SpriteKit

class GrassyPlatform: SKSpriteNode, GameSprite {
    var textureAtlas: SKTextureAtlas = SKTextureAtlas(named: "Enviroment")
    var initialSize: CGSize = CGSize(width: 630, height: 44)

    init() {
        super.init(texture: textureAtlas.textureNamed("GrassPlatform1"), color: .clear, size: initialSize)
        self.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        physicsBody = SKPhysicsBody(rectangleOf: initialSize)
        physicsBody?.restitution = 0
        self.physicsBody?.isDynamic = false
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }
}
I'm using the scene editor to place these objects onto the scene, assigning each object the relevant custom class, like the one above.
When I run the game, each object's position (assigned only in the scene editor) is respected, but the zRotation value is ignored. For example, rotating the platform in the scene editor results in the platform appearing at the correct position but with the default zRotation, not the one assigned in the scene.
I can adjust the rotation manually through self.zRotation, but this defeats the whole point of using the scene.sks for level design.
Is there a way to adjust the zRotation through the scene file, and if so, how?
Thanks
Solved: the problem was in a piece of code that I never added to the original post because I 'thought' it was irrelevant! Boy was I wrong. Another class handles the different scenes. I took this code from the book and was not 100% sure of its operation when I added it, before getting caught up in something else:
class EncounterManager {
    let encounterNames: [String] = ["Level1A", "Level1B"] // an array of all the scenes in the level
    var encounters: [SKNode] = [] // each scene is a node
    var currentEncounterIndex: Int?
    var previousEncounterIndex: Int?

    init() {
        for encounterFileName in encounterNames { // loop over all scenes in the scene array
            let encounterNode = SKNode() // create a new node for the encounter/scene
            if let encounterScene = SKScene(fileNamed: encounterFileName) { // load the encounter into an SKScene
                for child in encounterScene.children { // loop through each child node of the SKScene
                    let copyOfNode = type(of: child).init() // create a new node of the same type
                    copyOfNode.position = child.position // copy the position
                    copyOfNode.zPosition = child.zPosition // copy the zPosition
                    copyOfNode.name = child.name // copy the name
                    encounterNode.addChild(copyOfNode) // add the child to the encounter node
                }
            }
            encounters.append(encounterNode)
            // save initial sprite positions for this encounter
            saveSpritePositions(node: encounterNode)
        }
    }
}
This takes a copy of each object node and adds it to the game scene. The original author never needed to worry about the rotation of objects, so I added this and it worked:
copyOfNode.zRotation = child.zRotation // copy the zRotation
Thanks anyway if you looked.
I followed the example of changing filter intensity on the video camera from the GPUImage2 examples. I am trying to change the filter intensity on a static image with an iOS slider control, but it's not changing the intensity.
@IBOutlet weak var renderView: RenderView!
var filterContrast: ContrastAdjustment!
@IBOutlet weak var sliderContrast: UISlider!
let inputImage = UIImage(named: "WID-small.jpg")!
var picture: PictureInput!

// setting a dynamic observable property
dynamic var filterValue = 3.0 {
    didSet {
        filterContrast.contrast = GLfloat(filterValue)
        picture.processImage()
    }
}
In viewDidLayoutSubviews:
picture = PictureInput(image: inputImage)
filterContrast = ContrastAdjustment()
picture --> filterContrast --> renderView
filterValue = 3 // triggers didSet, which calls processImage()
In the slider's update handler:
filterValue = Double(nm)
Is there anything wrong with this approach?
Thanks
Every time viewDidLayoutSubviews runs (and it can run many times over a view controller's life, not just once), you're creating a new ContrastAdjustment filter instance and attaching it to the picture. Your old filter instance is still there, and the RenderView will ignore any inputs beyond the first one that goes into it. In fact, you should have been seeing warnings on the console telling you that you're adding too many inputs to the RenderView.
Instead of creating a whole new ContrastAdjustment filter each time, create your contrast filter once, keep it in an instance variable, and adjust its contrast property when the slider changes.
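A minimal sketch of that arrangement, reusing the outlets and GPUImage2 types from the question (viewDidLoad stands in for wherever your one-time setup happens, and sliderChanged is an illustrative action name):
var picture: PictureInput!
var filterContrast: ContrastAdjustment!

override func viewDidLoad() {
    super.viewDidLoad()
    // Build the pipeline exactly once.
    picture = PictureInput(image: UIImage(named: "WID-small.jpg")!)
    filterContrast = ContrastAdjustment()
    picture --> filterContrast --> renderView
    picture.processImage()
}

@IBAction func sliderChanged(_ sender: UISlider) {
    // Adjust the existing filter; don't rebuild the pipeline.
    filterContrast.contrast = GLfloat(sender.value)
    picture.processImage()
}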
I am trying to subclass MKTileOverlay, but am having issues with it not finding the canReplaceMapContent property on the object. What am I doing wrong? I go to New, create a new class as a subclass of MKTileOverlay, and add the methods the tutorials all say to add, but these simple properties aren't being found!
Here is the custom subclass of MKTileOverlay that I've been using to overlay the map in MapKit:
class CustomTileOverlay: MKTileOverlay {
    var mapLocation: MKMapPoint
    var mapSize: MKMapSize

    init(urlTemplate: String, location: MKMapPoint, size: MKMapSize) {
        mapLocation = location
        mapSize = size
        super.init(urlTemplate: urlTemplate)
    }

    override var boundingMapRect: MKMapRect {
        return MKMapRect(origin: mapLocation, size: mapSize)
    }
}
The reason for subclassing is to be able to adjust the boundingMapRect, since that's read-only in the base class (so if you don't need to adjust it, don't subclass MKTileOverlay).
Here's the setup for using the custom class. I'm pulling the values from a Core Data record I set up for the tile set, but you could hardwire them or get them from wherever fits your app. Since I have polylines overlaying the tiles, I need the last line to be sure the tiles stay under the lines; if you don't have both, you won't need that line.
[Declaration...]
private var tileLayer: CustomTileOverlay?
[Later in the code...]
let rectangle = overlayMap.getMapRectangle() // why I need to subclass
let mapURL = "file://" + overlayMap.getMapPath() + "/{z}/{x}/{y}.png"
tileLayer = CustomTileOverlay(urlTemplate: mapURL, location: rectangle.origin, size: rectangle.size)
tileLayer?.minimumZ = overlayMap.getMinimumZoom()
tileLayer?.maximumZ = overlayMap.getMaximumZoom()
tileLayer?.canReplaceMapContent = true
tileLayer?.tileSize = overlayMap.getTileSize()
self.mapView.add(tileLayer!)
self.mapView.insert(tileLayer!, at: 0) // set to the lowest z-level to ensure polylines stay above the map tiles
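One piece worth spelling out for completeness: none of this draws anything until the map view's delegate hands back a renderer for the overlay. A minimal sketch of that piece, using standard MapKit calls (the MapViewController name is illustrative, and mapView.delegate must be set to it):
extension MapViewController: MKMapViewDelegate {
    // Called by MapKit whenever an overlay needs drawing.
    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        if let tileOverlay = overlay as? MKTileOverlay {
            return MKTileOverlayRenderer(tileOverlay: tileOverlay)
        }
        return MKOverlayRenderer(overlay: overlay)
    }
}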
So I am working on a Breakout app in Swift. I currently have a ball, which is a UIView with a cornerRadius of 20.0 to emulate a ball, and a paddle, which is another UIView with a smaller cornerRadius of 5.0. I have programmatically made nine red views which are each 50x50 points. I have collision and motion mechanics for my ball, paddle, and block elements:
var dynamicAnimator: UIDynamicAnimator!
var pushBehavior: UIPushBehavior!
var collisionBehavior: UICollisionBehavior!
var ballDynamicBehavior: UIDynamicItemBehavior!
var paddleDynamicBehavior: UIDynamicItemBehavior!
var blockBehaviors: UIDynamicItemBehavior!
My issue is that the ball collides with the blocks, but I don't know how to detect that the ball hit a block. I do know how to make the views appear and disappear (give the view a background color matching its superview's, and remove it from blockBehaviors). Basically, I want to know how to detect when two views collide, via a function or something else.
It would also be awesome if I could also add multiple levels, lol.
A UICollisionBehavior needs a delegate that adopts the UICollisionBehaviorDelegate protocol. This delegate has a collisionBehavior(_:beganContactFor:with:at:) method that is called whenever a collision is detected.
For example:
var collisionBehavior: UICollisionBehavior! // create a UICollisionBehavior as you have done
collisionBehavior.addItem(ball) // add your items to it
collisionBehavior.addItem(block) // (faster to do this in the init step with `items` argument)
collisionBehavior.collisionDelegate = myDelegate // give it a delegate which adopts UICollisionBehaviorDelegate
dynamicAnimator.addBehavior(collisionBehavior) // add the behavior to your animator
Then implement func collisionBehavior(_:beganContactFor:with:at:) in your delegate class. Often people just use the UIViewController itself as the delegate, so the above line would read collisionBehavior.collisionDelegate = self.
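For the Breakout case above, the delegate method might look something like this. It's a sketch only, assuming the blocks are stored in a `blocks` array alongside the `blockBehaviors` from the question (the GameViewController name is illustrative):
extension GameViewController: UICollisionBehaviorDelegate {
    func collisionBehavior(_ behavior: UICollisionBehavior,
                           beganContactFor item1: UIDynamicItem,
                           with item2: UIDynamicItem,
                           at p: CGPoint) {
        // Check whether either party to the collision is one of the blocks.
        for item in [item1, item2] {
            guard let view = item as? UIView, blocks.contains(view) else { continue }
            // Remove the block from play: stop colliding/animating it and hide it.
            behavior.removeItem(view)
            blockBehaviors.removeItem(view)
            view.removeFromSuperview()
        }
    }
}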
See "Making objects respond to collisions" here for a good and short tutorial: http://www.raywenderlich.com/76147/uikit-dynamics-tutorial-swift.
I want to manipulate 2D textures in a 3D SceneKit scene.
Therefore I used this code to get local coordinates:
@IBAction func tap(sender: UITapGestureRecognizer) {
    var arr: NSArray = my3dView.hitTest(sender.locationInView(my3dView), options: NSDictionary(dictionary: [SCNHitTestFirstFoundOnlyKey: true]))
    var res: SCNHitTestResult = arr.firstObject as SCNHitTestResult
    var vect: SCNVector3 = res.localCoordinates
}
I read the texture out from my scene with:
var mat: SCNNode = myscene.rootNode.childNodes[0] as SCNNode
var child: SCNNode = mat.childNodeWithName("ID12", recursively: false)
var geo: SCNMaterial = child.geometry.firstMaterial
var channel = geo.diffuse.mappingChannel
var textureimg: UIImage = geo.diffuse.contents as UIImage
Now I want to draw at the touch point onto the texture.
How can I do that? How can I transform my coordinates from the touch to the texture image?
Sounds like you have two problems. (Without even having used regular expressions. :))
First, you need to get the texture coordinates of the tapped point -- that is, the point in 2D texture space on the surface of the object. You've almost got that right already. SCNHitTestResult provides those with the textureCoordinatesWithMappingChannel method. (You're using localCoordinates, which gets you a point in the 3D space owned by the node in the hit-test result.) And you already seem to have found the business about mapping channels, so you know what to pass to that method.
Problem #2 is how to draw.
You're doing the right thing to get the material's contents as a UIImage. Once you've got that, you could look into drawing with UIGraphics and CGContext functions -- create an image with UIGraphicsBeginImageContext, draw the existing image into it, then draw whatever new content you want to add at the tapped point. After that, you can get the image you were drawing with UIGraphicsGetImageFromCurrentImageContext and set it as the new diffuse.contents of your material. However, that's probably not the best way -- you're schlepping a bunch of image data around on the CPU, and the code is a bit unwieldy, too.
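For reference, a rough sketch of that CPU-side approach, assuming the `textureimg` and `geo` variables from the question and a `texcoord` point obtained as described below (the 10x10 green square is an arbitrary stand-in for real drawing):
// Redraw the texture image with a mark at the tapped texture coordinate.
UIGraphicsBeginImageContextWithOptions(textureimg.size, false, 0)
textureimg.drawInRect(CGRect(origin: CGPointZero, size: textureimg.size))
UIColor.greenColor().setFill()
UIRectFill(CGRect(x: texcoord.x * textureimg.size.width - 5,
                  y: texcoord.y * textureimg.size.height - 5,
                  width: 10, height: 10))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
geo.diffuse.contents = newImage // CPU round trip on every tap: it works, but it isn't fast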
A better approach might be to take advantage of the integration between SceneKit and SpriteKit. This way, all your 2D drawing is happening in the same GPU context as the 3D drawing -- and the code's a bit simpler.
You can set your material's diffuse.contents to a SpriteKit scene. (To use the UIImage you currently have for that texture, just stick it on an SKSpriteNode that fills the scene.) Once you have the texture coordinates, you can add a sprite to the scene at that point.
var nodeToDrawOn: SCNNode!
var skScene: SKScene!

func mySetup() { // or viewDidLoad, or wherever you do setup
    // whatever else you're doing for setup, plus:

    // 1. remember which node we want to draw on
    nodeToDrawOn = myScene.rootNode.childNodeWithName("ID12", recursively: true)

    // 2. set up that node's texture as a SpriteKit scene
    let currentImage = nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents as UIImage
    skScene = SKScene(size: currentImage.size)
    nodeToDrawOn.geometry!.firstMaterial!.diffuse.contents = skScene

    // 3. put the currentImage into a background sprite for the skScene
    let background = SKSpriteNode(texture: SKTexture(image: currentImage))
    background.position = CGPoint(x: skScene.frame.midX, y: skScene.frame.midY)
    skScene.addChild(background)
}
@IBAction func tap(sender: UITapGestureRecognizer) {
    let results = my3dView.hitTest(sender.locationInView(my3dView), options: [SCNHitTestFirstFoundOnlyKey: true]) as [SCNHitTestResult]
    if let result = results.first {
        if result.node === nodeToDrawOn {
            // 1. get the texture coordinates
            let channel = nodeToDrawOn.geometry!.firstMaterial!.diffuse.mappingChannel
            let texcoord = result.textureCoordinatesWithMappingChannel(channel)

            // 2. place a sprite there
            let sprite = SKSpriteNode(color: SKColor.greenColor(), size: CGSize(width: 10, height: 10))
            // scale coords: texcoords go 0.0-1.0, skScene space is in pixels
            sprite.position.x = texcoord.x * skScene.size.width
            sprite.position.y = texcoord.y * skScene.size.height
            skScene.addChild(sprite)
        }
    }
}
For more details on the SpriteKit approach (in Objective-C) see the SceneKit State of the Union Demo from WWDC14. That shows a SpriteKit scene used as the texture map for a torus, with spheres of paint getting thrown at it -- whenever a sphere collides with the torus, it gets a SCNHitTestResult and uses its texcoords to create a paint splatter in the SpriteKit scene.
Finally, some Swift style comments on your code (unrelated to the question and answer):
Use let instead of var wherever you don't need to reassign a value, and the optimizer will make your code go faster.
Explicit type annotations (res: SCNHitTestResult) are rarely necessary.
Swift dictionaries are bridged to NSDictionary, so you can pass them directly to an API that takes NSDictionary.
Casting to a Swift typed array (hitTest(...) as [SCNHitTestResult]) saves you from having to cast the contents.