I have an app that takes all the images of the user (all the assets from the Photos app). The app should then run through all the images, detect faces and return their facial landmarks, and then look in the database to see whether any friend has the same landmarks (recognizing friends' faces), similar to what Facebook does in the Moments app and on the web. The app will then show all the photos that the friend appears in. An important part of my app is user privacy, so I would like to keep the entire process on the device and not send anything to an online service. Another benefit of keeping it on the device is that every user of my app can have thousands of images, and working with an external service would be expensive and might slow down performance (if every image needs to be sent to the server).
From the research I have done, there are many online services (but they don't fit my requirement of keeping the process offline). There is also CIDetector, which detects faces and can return a few features such as eye location and mouth location (which I don't believe is good enough for reliable recognition). I have also heard about Luxand, OpenCV, and OpenFace, which all do on-device recognition, but they are C++ libraries, which makes them difficult to integrate with a Swift project (the documentation is not very good and doesn't explain how to integrate them into your project or how to perform face recognition in Swift).
So my question is: is there any way to perform face detection that returns the facial landmarks on the device?
If not, is there any other way or service I could use?
Also, is there any efficient and fast way to perform face detection and recognition when a user can have thousands of images?
By the way, I am at an early stage of development and am looking for free services that I could use during the development stage.
iOS has native face detection in the Core Image framework that works pretty well. You can detect eyes and other features as well. Just check out this code; it shows how you can work with it.
func detect() {
    guard let personciImage = CIImage(image: personPic.image!) else {
        return
    }

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector.featuresInImage(personciImage)

    // Converting to the view's coordinate system
    let ciImageSize = personciImage.extent.size
    var transform = CGAffineTransformMakeScale(1, -1)
    transform = CGAffineTransformTranslate(transform, 0, -ciImageSize.height)

    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")

        // Calculating the place for the faceBox
        var faceViewBounds = CGRectApplyAffineTransform(face.bounds, transform)
        let viewSize = personPic.bounds.size
        let scale = min(viewSize.width / ciImageSize.width,
                        viewSize.height / ciImageSize.height)
        let offsetX = (viewSize.width - ciImageSize.width * scale) / 2
        let offsetY = (viewSize.height - ciImageSize.height * scale) / 2

        faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, CGAffineTransformMakeScale(scale, scale))
        faceViewBounds.origin.x += offsetX
        faceViewBounds.origin.y += offsetY

        let faceBox = UIView(frame: faceViewBounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.redColor().CGColor
        faceBox.backgroundColor = UIColor.clearColor()
        personPic.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }
        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
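Beyond Core Image, the Vision framework (iOS 11+) can also return detailed facial landmarks entirely on device, which is closer to what the question asks for. A minimal sketch, assuming a UIImage input (the detectLandmarks(in:) helper name is just for illustration):

import UIKit
import Vision

// Minimal sketch: on-device face landmark detection with Vision (iOS 11+).
// `detectLandmarks(in:)` is a hypothetical helper; adapt the input to your asset pipeline.
func detectLandmarks(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])

    for face in request.results as? [VNFaceObservation] ?? [] {
        // Bounding box is normalized to the image, with the origin in the lower-left corner.
        print("Face bounds: \(face.boundingBox)")

        // Landmark regions are arrays of normalized points (eyes, nose, lips, etc.).
        if let landmarks = face.landmarks {
            print("Left eye: \(landmarks.leftEye?.normalizedPoints ?? [])")
            print("Nose: \(landmarks.nose?.normalizedPoints ?? [])")
            print("Outer lips: \(landmarks.outerLips?.normalizedPoints ?? [])")
        }
    }
}

Note that Vision's landmarks, like CIDetector's features, are detection output, not identity embeddings, so recognition still needs a separate model on top.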
I've had some barcode scanning code in my iOS app for many years now. Recently, users have begun complaining that it doesn't work with an iPhone 13 Pro.
During investigation, it seemed that I should be using the built-in triple camera if available. Doing that did fix it for the iPhone 13 Pro but subsequently broke it for the iPhone 12 Pro, which seemed to be working fine with the previous code.
How are you supposed to choose a suitable camera for all devices? It seems bizarre to me that Apple has suddenly made it so difficult to use this previously working code.
Here is my current code. The "fallback" section is what the code has used for years.
_session = [[AVCaptureSession alloc] init];

// Must use the macro camera for barcode scanning on newer devices, otherwise the image is blurry
if (@available(iOS 13.0, *)) {
    AVCaptureDeviceDiscoverySession *discoverySession =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInTripleCamera]
                                                                mediaType:AVMediaTypeVideo
                                                                 position:AVCaptureDevicePositionBack];
    if (discoverySession.devices.count == 0) {
        // No BuiltInTripleCamera
        _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    } else {
        _device = discoverySession.devices.firstObject;
    }
} else {
    // Fallback on earlier versions
    _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
The accepted answer works, but not all the time. Because lenses have different minimum focus distances, it is harder for the device to focus on small barcodes: you have to bring the device too close (closer than the minimum focus distance), so it will never autofocus on small barcodes. This used to work with older lenses whose minimum focus distance was 10-12 cm, but newer lenses, especially those on the iPhone 14 Pros with a 20 cm minimum focus distance, will be problematic.
The solution is ideally to use AVCaptureDeviceTypeBuiltInWideAngleCamera and set videoZoomFactor on the AVCaptureDevice to zoom in a little, so the barcode comes into focus. The value should be calculated from the input video properties and the minimum barcode size.
For details, please refer to this WWDC 2021 video, where they address exactly this issue: https://developer.apple.com/videos/play/wwdc2021/10047/?time=133
Here is the implementation of a class that sets the zoom factor on a device, which works for me. You can instantiate this class with your device instance and call applyAutomaticZoomFactorIfNeeded() just before you are about to commit your capture session configuration.
///
/// Calling this method will automatically zoom the device to compensate for its minimum focus distance. This distance becomes problematic
/// when scanning barcodes that are too small or when a device's minimum focus distance is too large (about 20 cm on the iPhone 14 Pro and Pro Max,
/// 15 cm on the iPhone 13 Pro, 12 cm or less on older iPhones). By zooming the input, the device will be able to focus on the preview
/// and complete the scan more easily.
///
/// - See https://developer.apple.com/videos/play/wwdc2021/10047/?time=133 for a more detailed explanation and
/// - See https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
///   for implementation instructions.
///
@available(iOS 15.0, *)
final class DeviceAutomaticVideoZoomFactor {

    enum Errors: Error {
        case minimumFocusDistanceUnknown
        case deviceLockFailed
    }

    private let device: AVCaptureDevice
    private let minimumCodeSize: Float

    init(device: AVCaptureDevice, minimumCodeSize: Float) {
        self.device = device
        self.minimumCodeSize = minimumCodeSize
    }

    ///
    /// Optimize the user experience for scanning QR codes down to smaller sizes (determined by `minimumCodeSize`, for example 2x2 cm).
    /// When scanning a QR code of that size, the user may need to get closer than the camera's minimum focus distance to fill the rect of interest.
    /// To have the QR code both fill the rect and still be in focus, we may need to apply some zoom.
    ///
    func applyAutomaticZoomFactorIfNeeded() throws {
        let deviceMinimumFocusDistance = Float(self.device.minimumFocusDistance)
        guard deviceMinimumFocusDistance != -1 else {
            throw Errors.minimumFocusDistanceUnknown
        }

        Logger.logIfStaging("Video Zoom Factor", "using device: \(self.device)")
        Logger.logIfStaging("Video Zoom Factor", "device minimum focus distance: \(deviceMinimumFocusDistance)")

        /*
         Set an initial square rect of interest that is 100% of the view's shortest side.
         This means that the region of interest will appear in the same spot regardless
         of whether the app starts in portrait or landscape.
         */
        let formatDimensions = CMVideoFormatDescriptionGetDimensions(self.device.activeFormat.formatDescription)
        let rectOfInterestWidth = Double(formatDimensions.height) / Double(formatDimensions.width)
        let deviceFieldOfView = self.device.activeFormat.videoFieldOfView
        let minimumSubjectDistanceForCode = self.minimumSubjectDistanceForCode(fieldOfView: deviceFieldOfView,
                                                                               minimumCodeSize: self.minimumCodeSize,
                                                                               previewFillPercentage: Float(rectOfInterestWidth))

        Logger.logIfStaging("Video Zoom Factor", "minimum subject distance: \(minimumSubjectDistanceForCode)")

        guard minimumSubjectDistanceForCode < deviceMinimumFocusDistance else {
            return
        }

        let zoomFactor = deviceMinimumFocusDistance / minimumSubjectDistanceForCode
        Logger.logIfStaging("Video Zoom Factor", "computed zoom factor: \(zoomFactor)")

        try self.device.lockForConfiguration()
        self.device.videoZoomFactor = CGFloat(zoomFactor)
        self.device.unlockForConfiguration()

        Logger.logIfStaging("Video Zoom Factor", "applied zoom factor: \(self.device.videoZoomFactor)")
    }

    private func minimumSubjectDistanceForCode(fieldOfView: Float,
                                               minimumCodeSize: Float,
                                               previewFillPercentage: Float) -> Float {
        /*
         Given the camera's horizontal field of view, we can compute the distance (mm) needed to make a code
         of minimumCodeSize (mm) fill the previewFillPercentage.
         */
        let radians = self.degreesToRadians(fieldOfView / 2)
        let filledCodeSize = minimumCodeSize / previewFillPercentage
        return filledCodeSize / tan(radians)
    }

    private func degreesToRadians(_ degrees: Float) -> Float {
        return degrees * Float.pi / 180
    }
}
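For reference, a rough usage sketch; session and videoInput are placeholders for your own capture setup and are not part of the class above:

// Hypothetical capture-session setup; only DeviceAutomaticVideoZoomFactor comes from the class above.
session.beginConfiguration()
// ... add your video input and metadata/video data outputs here ...
if #available(iOS 15.0, *) {
    let zoom = DeviceAutomaticVideoZoomFactor(device: videoInput.device,
                                              minimumCodeSize: 20) // 20 mm, i.e. a 2x2 cm code
    try? zoom.applyAutomaticZoomFactorIfNeeded()
}
session.commitConfiguration()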
Thankfully, with the help of Reddit, I was able to figure out that the solution is simply to replace AVCaptureDeviceTypeBuiltInTripleCamera with AVCaptureDeviceTypeBuiltInWideAngleCamera.
This is a long question so I wanted to put a TL;DR on top:
I want to track QR codes via one of two methods: image tracking (by cropping them upon detection) or placing anchors with raycasting. Both of these methods fail when the phone is in portrait mode. The camera source is an ARSession; SceneKit and RealityKit are not used, only ARKit. What should I do?
I am currently working on an application in Swift in which I render some content on a server, transmit the video to the iPhone, and display it on screen using an MTKView. I only needed a custom Metal shader to apply some complex calculations to the received frames, so I did not use SceneKit or RealityKit. I only have an ARSession from ARKit and a Metal view, and up to this point everything works fine.
I am able to do image tracking at this point. However, I want to apply this behaviour to QR codes. What I want is to detect a QR code (multiple if possible) and then track it just like an image. Since I don't have the QR codes as ARReferenceImages beforehand, as with normal image tracking, I was left with two options:
Option 1: Using raycast(_:) on ARSession
This is probably the right way to do it. However, for this I need to activate both plane-tracking options on the ARSession, which then creates many anchors, and managing them alongside image tracking becomes harder. That is not the actual problem, though. The actual problem is that when the phone is in landscape mode, raycasting works as intended, but when the phone goes into portrait mode, even if I pass the frame in the correct orientation, it misses everything and the results come back empty. I am not using hitTest(_:) because it is deprecated.
I want to explain the "correct orientation" issue here before going into the second option. The ARSession is capturing frames, and I can inspect each frame through the session's didUpdate delegate function. When I read the pixel buffer out of the frame using frame.capturedImage and turn it into a CIImage, the image is always in landscape orientation (width > height), regardless of whether the phone is in portrait mode. So whenever I pass this image on, I apply oriented(.right) for portrait and oriented(.up) for landscape. I got that idea from another question about the QR bounding box, and so far it is the best option (but not good enough). I also want to note that when I tried raycasting, I used the image size, not the screen size (screen size = my Metal view size, because it is fullscreen), since the image is actually larger than the screen. I can see this if I put a breakpoint and Quick Look the CIImage created from the current camera frame.
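To make the orientation handling concrete, this is roughly the mapping I mean; only the portrait and landscape-right cases are ones I actually use, the other two are assumptions following the same pattern, and currentOrientation is a placeholder:

import ImageIO
import UIKit

// Sketch of the orientation handling described above: ARKit's captured pixel buffer is
// landscape-right native, so it needs a rotation before Vision sees it in portrait.
func exifOrientation(for interfaceOrientation: UIInterfaceOrientation) -> CGImagePropertyOrientation {
    switch interfaceOrientation {
    case .portrait:           return .right
    case .portraitUpsideDown: return .left   // assumption, same pattern
    case .landscapeLeft:      return .down   // assumption, same pattern
    case .landscapeRight:     return .up
    default:                  return .up
    }
}

// e.g. CIImage(cvPixelBuffer: frame.capturedImage).oriented(exifOrientation(for: currentOrientation))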
Option 2: Cropping the QR and treating it as image tracking
This is another approach that I am currently working on. The algorithm is simple: check every frame with Vision. If any QR codes are detected, read their data first. If that data matches an existing QR code, re-read it only if the newly cropped QR image is larger than the existing one; otherwise do nothing. Then use this cropped QR image to track the QR code as an image. At that point we already have the data, so no problems there.
However, I have tried many times to do the proper transformation explained here in the answer. Again, I think I am able to transform the normalized bounding box into a real rect that can correctly crop the image. Yet, as with raycasting, it works perfectly only if the phone is in landscape position. When in portrait it works well enough ONLY IF the phone is really close to the QR code and the code is centered on the screen.
For related code, I have this in my View controller:
private var ciContext: CIContext = CIContext.init(options: nil)
private var sequenceHandler: VNImageRequestHandler?
And then I have this code to extract QR codes from CIImage:
func extractQrCode(image: CIImage) -> [VNBarcodeObservation]? {
    self.sequenceHandler = VNImageRequestHandler(ciImage: image)

    let barcodeRequest = VNDetectBarcodesRequest()
    barcodeRequest.symbologies = [.QR]

    try? self.sequenceHandler?.perform([barcodeRequest])

    guard let results = barcodeRequest.results else {
        return nil
    }
    return results
}
And this is the delegate that checks and operates on every frame (the code is currently for Option 2):
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let rotImg = self.renderer?.getInterfaceOrientation() == .portrait
        ? CIImage(cvPixelBuffer: frame.capturedImage).oriented(.right)
        : CIImage(cvPixelBuffer: frame.capturedImage)

    if let barcodes = self.extractQrCode(image: rotImg) {
        for barcode in barcodes {
            guard let payload = barcode.payloadStringValue else { continue }

            var rect = CGRect()
            rect = VNImageRectForNormalizedRect(barcode.boundingBox.botToTop(), Int(rotImg.extent.width), Int(rotImg.extent.height))

            let existingQR = TrackedImagesManager.imagesToTrack.filter { $0.isQR && $0.QRData == payload }.first

            if ((rect.size.width < 800 || rect.size.height < 800 || abs(rect.size.height - rect.size.width) > 32) && existingQR == nil) {
                DispatchQueue.main.async {
                    self.showToastMessage(message: "Please get closer to the QR code and try centering it on your screen.", font: UIFont.systemFont(ofSize: 18), duration: 3)
                }
                continue
            } else if (existingQR != nil) {
                if (rect.width > existingQR?.originalImage?.size.width ?? 999) {
                    let croppedImg = rotImg.cropped(to: rect)
                    let croppedCgImage = self.ciContext.createCGImage(croppedImg, from: croppedImg.extent)!
                    let trackImg = UIImage(cgImage: croppedCgImage)
                    existingQR?.originalImage = trackImg
                    existingQR?.image = ARReferenceImage(croppedCgImage, orientation: .up, physicalWidth: 0.1)
                } else {
                    continue
                }
            } else if rect.width != 0 {
                let croppedImg = rotImg.cropped(to: rect)
                let croppedCgImage = self.ciContext.createCGImage(croppedImg, from: croppedImg.extent)!
                let trackImg = UIImage(cgImage: croppedCgImage)
                TrackedImagesManager.imagesToTrack.append(TrackedImage(id: 9, type: 1, image: ARReferenceImage(croppedCgImage, orientation: .up, physicalWidth: 0.1), originalImage: trackImg, isQR: true, QRData: payload))
                print("qr norm rect: \(barcode.boundingBox) \n qr rect: \(rect) \nqr data: \(payload) \nqr hittestres: ")
            }
        }
    }
}
Finally, for the transformation, I have this extension (I tried various approaches; this is the best so far):
extension CGRect {
    func botToTop() -> CGRect {
        let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
        return self.applying(transform)
    }
}
So for both options I need some advice to make things right. The Android side of the same feature is implemented as in Option 2, but Android returns a nicely cropped QR code upon detection; we don't have that. What do I do now?
I'm making a zoom control for my app, and I'd like to make it advanced, like the one in the default Camera app from Apple: sample
I did some research, and still have some questions about it.
1. Is it possible to get the focal length value programmatically? There are labels like 13mm and 26mm for the different back cameras in the default app, but there is no such property on AVCaptureDevice. (It is probably needed to determine the zoom values; see the next question.)
2. How can we determine the zoom values to display in the UI? The thing is that AVCaptureDevice's minAvailableVideoZoomFactor always starts from 1x, but in the Camera app we can see that on devices with an ultra-wide camera the scale starts at 0.5x, so there should be some way to map these values onto each other. As I understand it, Apple treats the "usual" back camera as the default (that is, 1x), and all other values are relative to it: 13mm is 0.5 * 26mm, so the first value on the iPhone 13 Pro zoom control is 0.5x; the second value is the "default" 1x (26mm); and the telephoto camera is 77mm, so the third value is 3x (26mm * 3 = 78mm ~= 77mm). Please clarify how it is actually calculated and correct me if my assumption is wrong.
3. What is the correct way to get the max zoom value? If I try AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInTripleCamera], mediaType: .video, position: .back).devices.first!.maxAvailableVideoZoomFactor, it says 123.75 (iPhone 13 Pro), but in the default Camera app the max zoom value is 15x. Why is it exactly 15x and where does it come from? (My assumption is that the max digital zoom for all iPhones is 5x, so on the 13 Pro the telephoto camera zooms 3x relative to the "usual" camera, and thus we get 3x * 5x = 15x max zoom.)
4. Is there any universal way to get "the best" (i.e., most capable) camera? For example, right now I can specify [.builtInTripleCamera, .builtInDualWideCamera, .builtInDualCamera, .builtInWideAngleCamera] for the discovery session and pick the first item in the devices array, but if Apple releases, let's say, some ".builtInQuadrupleCamera" in a couple of years, this code will have to be modified, because it won't be included automatically.
To sum up (TL;DR version):
I suppose the final code should look something like this:
let deviceTypes: [AVCaptureDevice.DeviceType]
if #available(iOS 13, *) {
    deviceTypes = [.builtInTripleCamera, .builtInDualWideCamera, .builtInDualCamera, .builtInWideAngleCamera]
} else {
    deviceTypes = [.builtInDualCamera, .builtInWideAngleCamera]
}

let session = AVCaptureDevice.DiscoverySession(
    deviceTypes: deviceTypes,
    mediaType: .video,
    position: .back
)

if let device = session.devices.first {
    let uiZoomValues = device.getUIZoomValues()
}
extension AVCaptureDevice {

    func getUIZoomValues() -> [Float] {
        // Hardcoded. It seems like all iPhones limit digital zoom to 5x.
        let maxDigitalZoom: Float = 5

        // Fallback for old iOS versions
        guard #available(iOS 13, *) else { return [1, maxDigitalZoom] }

        let uiZoomValues: [Float]
        let factors = virtualDeviceSwitchOverVideoZoomFactors

        switch deviceType {
        case .builtInTripleCamera, .builtInDualWideCamera:
            // Ultra-wide camera is available - starting zoom from 0.5x
            let firstZoom: Float = 1.0 / factors.first!.floatValue
            uiZoomValues = [firstZoom] + factors.map { $0.floatValue * firstZoom } + [firstZoom * factors.last!.floatValue * maxDigitalZoom]
        case .builtInDualCamera:
            // No ultra-wide. Starting from 1x
            uiZoomValues = [1.0] + factors.map { $0.floatValue } + [factors.last!.floatValue * maxDigitalZoom]
        case .builtInWideAngleCamera:
            // Just a single "usual" camera.
            uiZoomValues = [1, maxDigitalZoom]
        default:
            fatalError("this should not happen on a real device")
        }

        return uiZoomValues
    }
}
Two main concerns about this code:
1. We have to hardcode maxDigitalZoom. Is there any way to get it programmatically? Apple states 5x in the iPhone specs, and there is AVCaptureDevice.maxAvailableVideoZoomFactor, but those values are different (for example, the iPhone 13 Pro has 15x in the specs vs 123.75x in maxAvailableVideoZoomFactor).
2. The builtInDualCamera case (iPhone XS Max, for example). All the code above relies on the virtualDeviceSwitchOverVideoZoomFactors property, which is available only from iOS 13, but builtInDualCamera is available from iOS 10.2, so what happens if a user has an XS Max? Will it work on iOS >= 13 but break on earlier versions? Or will it not work at all?
Answers to the questions, in order:
I think not.
Works for me: create a dictionary, var zoomFactors: [String: CGFloat] = ["1": 1], then manage the AVCaptureDevice from there. I think you can play with getApproximation() to achieve the goal.
No.
Code questions:
No, but I think one of the easiest methods is to play with the getApproximation() idea.
I believe it will crash.
Although I understand the theory behind image compositing, I haven't dealt much with hardware acceleration and I'm running into implementation issues on iOS (9.2, iPhone 6S). My project is to sequentially composite a large number (20, all the way to hundreds) of large images (12 megapixel) on top of each other at decreasing opacities, and I'm looking for advice as to the best framework or technique. I know there must be a good, hardware accelerated, destructive compositing tool capable of handling large files on iOS, because I can perform this task in Safari in an HTML Canvas tag, and load this page in Safari on the iPhone at nearly the same blazing speed.
This can be a destructive compositing task, like painting in Canvas, so I shouldn't have memory issues as the phone will only have to store the current result up to that point. Ideally, I'd like floating point pixel components, and I'd also like to be able to see the progress on screen.
Core Image has filters that seem great, but they are intended to operate losslessly on one or two pictures and return one result. I can feed that result into the filter again with the next image, and so on, but since the filter doesn't render immediately, this chaining of filters runs me out of memory after about 60 images. Rendering to a Core Graphics image object and reading back in as a Core Image object after each filter doesn't help either, as that overloads the memory even faster.
Looking at the documentation, there are a number of other ways for iOS to leverage the GPU - CALayers being a prime example. But I'm unclear if that handles pictures larger than the screen, or is only intended for framebuffers the size of the screen.
For this task - to leverage the GPU to store a destructively composited "stack" of 12-megapixel photos, repeatedly adding an additional one on top at a specified opacity, while outputting the current contents of the stack scaled down to the screen - what is the best approach? Can I use an established framework/technique, or am I better off diving into OpenGL and Metal myself? I know the iPhone has this capability; I just need to figure out how to leverage it.
This is what I've got so far. The profiler tells me the rendering takes about 350 ms, but I run out of memory if I increase to 20 pics. If I don't render after each loop iteration, I can get to about 60 pics before I run out of memory.
var stackBuffer: CIImage!
var stackRender: CGImage!
var uiImage: UIImage!

let glContext = EAGLContext(API: .OpenGLES3)
let context = CIContext(EAGLContext: glContext)

// Preload a list of 10 test pics
var ciImageArray = Array(count: 10, repeatedValue: CIImage.emptyImage())
for i in 0...9 {
    uiImage = UIImage(named: String(i) + ".jpg")!
    ciImageArray[i] = CIImage(image: uiImage)!
}

// Put the first image in the buffer
stackBuffer = ciImageArray[0]

for i in 1...9 {
    // The next image will have an opacity of 1/n
    let topImage = ciImageArray[i]
    let alphaTop = topImage.imageByApplyingFilter(
        "CIColorMatrix", withInputParameters: [
            "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
        ])

    // Layer the next image on top of the stack
    let filter = CIFilter(name: "CISourceOverCompositing")!
    filter.setValue(alphaTop, forKey: kCIInputImageKey)
    filter.setValue(stackBuffer, forKey: kCIInputBackgroundImageKey)

    // Render the result, and read it back in
    stackRender = context.createCGImage(filter.outputImage!, fromRect: stackBuffer.extent)
    stackBuffer = CIImage(CGImage: stackRender)
}

// Output the result
uiImage = UIImage(CGImage: stackRender)
compositeView.image = uiImage
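One pattern that may keep memory flat is to force Core Image to flush the chain every iteration by rendering into a pre-allocated CVPixelBuffer and wrapping that buffer as the new background, ping-ponging between two buffers so a pass never reads and writes the same surface. This is only a sketch in current Swift under those assumptions; the function name, the 32BGRA format, and equal image extents are all assumptions, not a drop-in replacement for the loop above:

import CoreImage
import CoreVideo

// Sketch: destructively composite a list of CIImages using two reusable pixel buffers.
// Rendering into a buffer every pass flushes the filter chain, so memory stays roughly
// constant no matter how many images are stacked. Assumes all images share the same extent.
func compositeDestructively(_ images: [CIImage], context: CIContext) -> CIImage? {
    guard let first = images.first else { return nil }
    let width = Int(first.extent.width)
    let height = Int(first.extent.height)

    // 32BGRA for simplicity; a half-float format (kCVPixelFormatType_64RGBAHalf)
    // could preserve more precision if needed.
    var bufferA: CVPixelBuffer?
    var bufferB: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, nil, &bufferA)
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, nil, &bufferB)
    guard let bufA = bufferA, let bufB = bufferB else { return nil }

    // Seed the stack with the first image.
    context.render(first, to: bufA)
    var stack = CIImage(cvPixelBuffer: bufA)

    for (i, top) in images.enumerated().dropFirst() {
        // The next image gets an opacity of 1/(i+1), as in the original loop.
        let alphaTop = top.applyingFilter("CIColorMatrix", parameters: [
            "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
        ])
        let composited = alphaTop.composited(over: stack)

        // Ping-pong: render into the buffer we are NOT currently reading from.
        let target = (i % 2 == 1) ? bufB : bufA
        context.render(composited, to: target)
        stack = CIImage(cvPixelBuffer: target)
    }
    return stack
}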
In Sprite Kit using Swift, I am trying to build a chess board (in actuality, a chess-like board / tile grid). So in general, how should I go about creating a square grid board?
I have done a lot of research and have studied some examples of the high-level concept of chess-like boards built on multi-dimensional arrays, but they still don't really explain how to VISUALLY represent the board in Sprite Kit and, more importantly, how to map the visual representation to the letter+number representation in a multi-dimensional array...
Any thoughts?
If anyone could answer at least one point/part of the above question, it would be greatly appreciated! Big thank you in advance!
One way to draw a chessboard in SpriteKit is to add alternating white and black sprite nodes at the appropriate locations. Here's an example of how to do that.
override func didMoveToView(view: SKView) {
    self.scaleMode = .ResizeFill

    // Draw the board
    drawBoard()

    // Add a game piece to the board
    if let square = squareWithName("b7") {
        let gamePiece = SKSpriteNode(imageNamed: "Spaceship")
        gamePiece.size = CGSizeMake(24, 24)
        square.addChild(gamePiece)
    }
    if let square = squareWithName("e3") {
        let gamePiece = SKSpriteNode(imageNamed: "Spaceship")
        gamePiece.size = CGSizeMake(24, 24)
        square.addChild(gamePiece)
    }
}
This method draws the chessboard.
func drawBoard() {
    // Board parameters
    let numRows = 8
    let numCols = 8
    let squareSize = CGSizeMake(32, 32)
    let xOffset: CGFloat = 50
    let yOffset: CGFloat = 50

    // Column characters
    let alphas: String = "abcdefgh"

    // Used to alternate between white and black squares
    var toggle: Bool = false

    for row in 0...numRows-1 {
        for col in 0...numCols-1 {
            // Letter for this column
            let colChar = Array(alphas)[col]

            // Determine the color of the square
            let color = toggle ? SKColor.whiteColor() : SKColor.blackColor()

            let square = SKSpriteNode(color: color, size: squareSize)
            square.position = CGPointMake(CGFloat(col) * squareSize.width + xOffset,
                                          CGFloat(row) * squareSize.height + yOffset)
            // Set the sprite's name (e.g., a8, c5, d1)
            square.name = "\(colChar)\(8-row)"
            self.addChild(square)
            toggle = !toggle
        }
        toggle = !toggle
    }
}
This method returns the square node with the specified name
func squareWithName(name: String) -> SKSpriteNode? {
    let square: SKSpriteNode? = self.childNodeWithName(name) as SKSpriteNode?
    return square
}
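If you also need to map the letter+number names back to indices in a multi-dimensional array, as the question asks, a small helper along these lines may work. It is a sketch in current Swift, not part of the drawing code above, and it assumes a board[row][col] layout with row 0 holding rank 8, matching the naming convention in drawBoard():

// Hypothetical helper: convert a square name like "b7" into (row, col) indices
// for an 8x8 array, assuming board[row][col] with row 0 holding rank 8.
func indices(forSquareName name: String) -> (row: Int, col: Int)? {
    let files = Array("abcdefgh")
    guard name.count == 2,
          let file = name.first, let col = files.firstIndex(of: file),
          let rank = Int(String(name.last!)), (1...8).contains(rank) else {
        return nil
    }
    return (row: 8 - rank, col: col)
}

// Example: indices(forSquareName: "b7") returns (row: 1, col: 1)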