How to make a toggle switch with enums - iOS

I am looking into making a toggle switch for the camera input. What I am aiming for is an enum that I can set to CameraPosition.rear or CameraPosition.front so that the camera input changes accordingly.
This is what I've got so far:
var currentCameraPosition: CameraPosition?
var frontCameraInput: AVCaptureDeviceInput?
var rearCameraInput: AVCaptureDeviceInput?
var currentCameraInput: AVCaptureDeviceInput?
enum CameraPosition {
case front
case rear
func toggle(){
switch self {
case .front:
currentCameraInput = frontCameraInput
case .rear:
currentCameraInput = rearCameraInput
}
}
}
The problem is that the compiler complains: Instance member 'currentCameraInput' cannot be used on type 'Camera'. How should I fix this, or how can I rewrite it?

You can totally use an enum for this if you like. I'd recommend using didSet to toggle rather than having to set your CameraPosition and then separately call toggle. This way, setting your camera position will automatically update your camera input. I'd set it up like this:
var currentCameraPosition: CameraPosition? {
didSet {
if let position = currentCameraPosition {
switch position {
case .front: currentCameraInput = frontCameraInput
case .rear: currentCameraInput = rearCameraInput
}
}
}
}
var frontCameraInput: AVCaptureDeviceInput?
var rearCameraInput: AVCaptureDeviceInput?
var currentCameraInput: AVCaptureDeviceInput?
enum CameraPosition {
case front
case rear
}
You could even ditch the switch and use a ternary operator since you only have 2 options:
var currentCameraPosition: CameraPosition? {
didSet {
if let position = currentCameraPosition {
currentCameraInput = position == .front ? frontCameraInput : rearCameraInput
}
}
}
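Putting it together, here's a minimal sketch of how this might sit inside the Camera class that the error message mentions (the actual AVCaptureSession reconfiguration is assumed to happen elsewhere):
import AVFoundation

class Camera {
    enum CameraPosition {
        case front
        case rear
    }

    var frontCameraInput: AVCaptureDeviceInput?
    var rearCameraInput: AVCaptureDeviceInput?
    var currentCameraInput: AVCaptureDeviceInput?

    var currentCameraPosition: CameraPosition? {
        didSet {
            guard let position = currentCameraPosition else { return }
            // Only the stored reference is swapped here; swapping the session's
            // inputs still has to happen wherever the session is configured.
            currentCameraInput = position == .front ? frontCameraInput : rearCameraInput
        }
    }
}

// Usage: setting the position drives the input swap automatically.
// camera.currentCameraPosition = .rear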

Related

Number text recognition not highlighting/recognizing text

I am following the Apple phone number recognition sample. Normally it creates a red outline around the recognized text. Mine does not seem to recognize the text or create the red outline, even though I used their code. The only difference is that my view controller class is called "TextScanViewController" whereas theirs is just "ViewController". I went through and made sure that any "ViewController" references were changed to "TextScanViewController". Am I missing something else that I should change?
Here is what it should look like (when I use the original Apple project) compared to what it is doing (it should show red outlines but does not, even when the text is perfectly centered in the rectangle).
There are 5 different swift files I am using (PreviewView, TextScanViewController, VisionViewController, StringUtils, AppDelegate)
TextScanViewController:
import UIKit
import AVFoundation
import Vision
class TextScanViewController: UIViewController {
// MARK: - UI objects
@IBOutlet weak var previewView: PreviewView!
@IBOutlet weak var cutoutView: UIView!
@IBOutlet weak var numberView: UILabel!
var maskLayer = CAShapeLayer()
// Device orientation. Updated whenever the orientation changes to a
// different supported orientation.
var currentOrientation = UIDeviceOrientation.portrait
// MARK: - Capture related objects
private let captureSession = AVCaptureSession()
let captureSessionQueue = DispatchQueue(label: "com.example.apple-samplecode.CaptureSessionQueue")
var captureDevice: AVCaptureDevice?
var videoDataOutput = AVCaptureVideoDataOutput()
let videoDataOutputQueue = DispatchQueue(label: "com.example.apple-samplecode.VideoDataOutputQueue")
// MARK: - Region of interest (ROI) and text orientation
// Region of video data output buffer that recognition should be run on.
// Gets recalculated once the bounds of the preview layer are known.
var regionOfInterest = CGRect(x: 0, y: 0, width: 1, height: 1)
// Orientation of text to search for in the region of interest.
var textOrientation = CGImagePropertyOrientation.up
// MARK: - Coordinate transforms
var bufferAspectRatio: Double!
// Transform from UI orientation to buffer orientation.
var uiRotationTransform = CGAffineTransform.identity
// Transform bottom-left coordinates to top-left.
var bottomToTopTransform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
// Transform coordinates in ROI to global coordinates (still normalized).
var roiToGlobalTransform = CGAffineTransform.identity
// Vision -> AVF coordinate transform.
var visionToAVFTransform = CGAffineTransform.identity
// MARK: - View controller methods
override func viewDidLoad() {
super.viewDidLoad()
// Set up preview view.
previewView.session = captureSession
// Set up cutout view.
cutoutView.backgroundColor = UIColor.gray.withAlphaComponent(0.5)
maskLayer.backgroundColor = UIColor.clear.cgColor
maskLayer.fillRule = .evenOdd
cutoutView.layer.mask = maskLayer
// Starting the capture session is a blocking call. Perform setup using
// a dedicated serial dispatch queue to prevent blocking the main thread.
captureSessionQueue.async {
self.setupCamera()
// Calculate region of interest now that the camera is setup.
DispatchQueue.main.async {
// Figure out initial ROI.
self.calculateRegionOfInterest()
}
}
}
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
super.viewWillTransition(to: size, with: coordinator)
// Only change the current orientation if the new one is landscape or
// portrait. You can't really do anything about flat or unknown.
let deviceOrientation = UIDevice.current.orientation
if deviceOrientation.isPortrait || deviceOrientation.isLandscape {
currentOrientation = deviceOrientation
}
// Handle device orientation in the preview layer.
if let videoPreviewLayerConnection = previewView.videoPreviewLayer.connection {
if let newVideoOrientation = AVCaptureVideoOrientation(deviceOrientation: deviceOrientation) {
videoPreviewLayerConnection.videoOrientation = newVideoOrientation
}
}
// Orientation changed: figure out new region of interest (ROI).
calculateRegionOfInterest()
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
updateCutout()
}
// MARK: - Setup
func calculateRegionOfInterest() {
// In landscape orientation the desired ROI is specified as the ratio of
// buffer width to height. When the UI is rotated to portrait, keep the
// vertical size the same (in buffer pixels). Also try to keep the
// horizontal size the same up to a maximum ratio.
let desiredHeightRatio = 0.15
let desiredWidthRatio = 0.6
let maxPortraitWidth = 0.8
// Figure out size of ROI.
let size: CGSize
if currentOrientation.isPortrait || currentOrientation == .unknown {
size = CGSize(width: min(desiredWidthRatio * bufferAspectRatio, maxPortraitWidth), height: desiredHeightRatio / bufferAspectRatio)
} else {
size = CGSize(width: desiredWidthRatio, height: desiredHeightRatio)
}
// Make it centered.
regionOfInterest.origin = CGPoint(x: (1 - size.width) / 2, y: (1 - size.height) / 2)
regionOfInterest.size = size
// ROI changed, update transform.
setupOrientationAndTransform()
// Update the cutout to match the new ROI.
DispatchQueue.main.async {
// Wait for the next run cycle before updating the cutout. This
// ensures that the preview layer already has its new orientation.
self.updateCutout()
}
}
func updateCutout() {
// Figure out where the cutout ends up in layer coordinates.
let roiRectTransform = bottomToTopTransform.concatenating(uiRotationTransform)
let cutout = previewView.videoPreviewLayer.layerRectConverted(fromMetadataOutputRect: regionOfInterest.applying(roiRectTransform))
// Create the mask.
let path = UIBezierPath(rect: cutoutView.frame)
path.append(UIBezierPath(rect: cutout))
maskLayer.path = path.cgPath
// Move the number view down to under cutout.
var numFrame = cutout
numFrame.origin.y += numFrame.size.height
numberView.frame = numFrame
}
func setupOrientationAndTransform() {
// Recalculate the affine transform between Vision coordinates and AVF coordinates.
// Compensate for region of interest.
let roi = regionOfInterest
roiToGlobalTransform = CGAffineTransform(translationX: roi.origin.x, y: roi.origin.y).scaledBy(x: roi.width, y: roi.height)
// Compensate for orientation (buffers always come in the same orientation).
switch currentOrientation {
case .landscapeLeft:
textOrientation = CGImagePropertyOrientation.up
uiRotationTransform = CGAffineTransform.identity
case .landscapeRight:
textOrientation = CGImagePropertyOrientation.down
uiRotationTransform = CGAffineTransform(translationX: 1, y: 1).rotated(by: CGFloat.pi)
case .portraitUpsideDown:
textOrientation = CGImagePropertyOrientation.left
uiRotationTransform = CGAffineTransform(translationX: 1, y: 0).rotated(by: CGFloat.pi / 2)
default: // We default everything else to .portraitUp
textOrientation = CGImagePropertyOrientation.right
uiRotationTransform = CGAffineTransform(translationX: 0, y: 1).rotated(by: -CGFloat.pi / 2)
}
// Full Vision ROI to AVF transform.
visionToAVFTransform = roiToGlobalTransform.concatenating(bottomToTopTransform).concatenating(uiRotationTransform)
}
func setupCamera() {
guard let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back) else {
print("Could not create capture device.")
return
}
self.captureDevice = captureDevice
// NOTE:
// Requesting 4k buffers allows recognition of smaller text but will
// consume more power. Use the smallest buffer size necessary to keep
// down battery usage.
if captureDevice.supportsSessionPreset(.hd4K3840x2160) {
captureSession.sessionPreset = AVCaptureSession.Preset.hd4K3840x2160
bufferAspectRatio = 3840.0 / 2160.0
} else {
captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
bufferAspectRatio = 1920.0 / 1080.0
}
guard let deviceInput = try? AVCaptureDeviceInput(device: captureDevice) else {
print("Could not create device input.")
return
}
if captureSession.canAddInput(deviceInput) {
captureSession.addInput(deviceInput)
}
// Configure video data output.
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
if captureSession.canAddOutput(videoDataOutput) {
captureSession.addOutput(videoDataOutput)
// NOTE:
// There is a trade-off to be made here. Enabling stabilization will
// give temporally more stable results and should help the recognizer
// converge. But if it's enabled the VideoDataOutput buffers don't
// match what's displayed on screen, which makes drawing bounding
// boxes very hard. Disable it in this app to allow drawing detected
// bounding boxes on screen.
videoDataOutput.connection(with: AVMediaType.video)?.preferredVideoStabilizationMode = .off
} else {
print("Could not add VDO output")
return
}
// Set zoom and autofocus to help focus on very small text.
do {
try captureDevice.lockForConfiguration()
captureDevice.videoZoomFactor = 2
captureDevice.autoFocusRangeRestriction = .near
captureDevice.unlockForConfiguration()
} catch {
print("Could not set zoom level due to error: \(error)")
return
}
captureSession.startRunning()
}
// MARK: - UI drawing and interaction
func showString(string: String) {
// Found a definite number.
// Stop the camera synchronously to ensure that no further buffers are
// received. Then update the number view asynchronously.
captureSessionQueue.sync {
self.captureSession.stopRunning()
DispatchQueue.main.async {
self.numberView.text = string
self.numberView.isHidden = false
}
}
}
@IBAction func handleTap(_ sender: UITapGestureRecognizer) {
captureSessionQueue.async {
if !self.captureSession.isRunning {
self.captureSession.startRunning()
}
DispatchQueue.main.async {
self.numberView.isHidden = true
}
}
}
}
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension TextScanViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
// This is implemented in VisionViewController.
}
}
// MARK: - Utility extensions
extension AVCaptureVideoOrientation {
init?(deviceOrientation: UIDeviceOrientation) {
switch deviceOrientation {
case .portrait: self = .portrait
case .portraitUpsideDown: self = .portraitUpsideDown
case .landscapeLeft: self = .landscapeRight
case .landscapeRight: self = .landscapeLeft
default: return nil
}
}
}
PreviewView:
import Foundation
import UIKit
import AVFoundation
class PreviewView: UIView {
var videoPreviewLayer: AVCaptureVideoPreviewLayer {
guard let layer = layer as? AVCaptureVideoPreviewLayer else {
fatalError("Expected `AVCaptureVideoPreviewLayer` type for layer. Check PreviewView.layerClass implementation.")
}
return layer
}
var session: AVCaptureSession? {
get {
return videoPreviewLayer.session
}
set {
videoPreviewLayer.session = newValue
}
}
// MARK: UIView
override class var layerClass: AnyClass {
return AVCaptureVideoPreviewLayer.self
}
}
VisionViewController:
import UIKit
import AVFoundation
import Vision
class VisionViewController: TextScanViewController {
var request: VNRecognizeTextRequest!
// Temporal string tracker
let numberTracker = StringTracker()
override func viewDidLoad() {
// Set up vision request before letting ViewController set up the camera
// so that it exists when the first buffer is received.
request = VNRecognizeTextRequest(completionHandler: recognizeTextHandler)
super.viewDidLoad()
}
// MARK: - Text recognition
// Vision recognition handler.
func recognizeTextHandler(request: VNRequest, error: Error?) {
var numbers = [String]()
var redBoxes = [CGRect]() // Shows all recognized text lines
var greenBoxes = [CGRect]() // Shows words that might be serials
guard let results = request.results as? [VNRecognizedTextObservation] else {
return
}
let maximumCandidates = 1
for visionResult in results {
guard let candidate = visionResult.topCandidates(maximumCandidates).first else { continue }
// Draw red boxes around any detected text, and green boxes around
// any detected phone numbers. The phone number may be a substring
// of the visionResult. If a substring, draw a green box around the
// number and a red box around the full string. If the number covers
// the full result only draw the green box.
var numberIsSubstring = true
if let result = candidate.string.extractPhoneNumber() {
let (range, number) = result
// Number may not cover full visionResult. Extract bounding box
// of substring.
if let box = try? candidate.boundingBox(for: range)?.boundingBox {
numbers.append(number)
greenBoxes.append(box)
numberIsSubstring = !(range.lowerBound == candidate.string.startIndex && range.upperBound == candidate.string.endIndex)
}
}
if numberIsSubstring {
redBoxes.append(visionResult.boundingBox)
}
}
// Log any found numbers.
numberTracker.logFrame(strings: numbers)
show(boxGroups: [(color: UIColor.red.cgColor, boxes: redBoxes), (color: UIColor.green.cgColor, boxes: greenBoxes)])
// Check if we have any temporally stable numbers.
if let sureNumber = numberTracker.getStableString() {
showString(string: sureNumber)
numberTracker.reset(string: sureNumber)
}
}
override func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
// Configure for running in real-time.
request.recognitionLevel = .fast
// Language correction won't help recognizing phone numbers. It also
// makes recognition slower.
request.usesLanguageCorrection = false
// Only run on the region of interest for maximum speed.
request.regionOfInterest = regionOfInterest
let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: textOrientation, options: [:])
do {
try requestHandler.perform([request])
} catch {
print(error)
}
}
}
// MARK: - Bounding box drawing
// Draw a box on screen. Must be called from main queue.
var boxLayer = [CAShapeLayer]()
func draw(rect: CGRect, color: CGColor) {
let layer = CAShapeLayer()
layer.opacity = 0.5
layer.borderColor = color
layer.borderWidth = 2
layer.frame = rect
boxLayer.append(layer)
previewView.videoPreviewLayer.insertSublayer(layer, at: 1)
}
// Remove all drawn boxes. Must be called on main queue.
func removeBoxes() {
for layer in boxLayer {
layer.removeFromSuperlayer()
}
boxLayer.removeAll()
}
typealias ColoredBoxGroup = (color: CGColor, boxes: [CGRect])
// Draws groups of colored boxes.
func show(boxGroups: [ColoredBoxGroup]) {
DispatchQueue.main.async {
let layer = self.previewView.videoPreviewLayer
self.removeBoxes()
for boxGroup in boxGroups {
let color = boxGroup.color
for box in boxGroup.boxes {
let rect = layer.layerRectConverted(fromMetadataOutputRect: box.applying(self.visionToAVFTransform))
self.draw(rect: rect, color: color)
}
}
}
}
}
StringUtils:
import Foundation
extension Character {
// Given a list of allowed characters, try to convert self to those in list
// if not already in it. This handles some common misclassifications for
// characters that are visually similar and can only be correctly recognized
// with more context and/or domain knowledge. Some examples (should be read
// in Menlo or some other font that has different symbols for all characters):
// 1 and l are the same character in Times New Roman
// I and l are the same character in Helvetica
// 0 and O are extremely similar in many fonts
// oO, wW, cC, sS, pP and others only differ by size in many fonts
func getSimilarCharacterIfNotIn(allowedChars: String) -> Character {
let conversionTable = [
"s": "S",
"S": "5",
"5": "S",
"o": "O",
"Q": "O",
"O": "0",
"0": "O",
"l": "I",
"I": "1",
"1": "I",
"B": "8",
"8": "B"
]
// Allow a maximum of two substitutions to handle 's' -> 'S' -> '5'.
let maxSubstitutions = 2
var current = String(self)
var counter = 0
while !allowedChars.contains(current) && counter < maxSubstitutions {
if let altChar = conversionTable[current] {
current = altChar
counter += 1
} else {
// Doesn't match anything in our table. Give up.
break
}
}
return current.first!
}
}
extension String {
// Extracts the first US-style phone number found in the string, returning
// the range of the number and the number itself as a tuple.
// Returns nil if no number is found.
func extractPhoneNumber() -> (Range<String.Index>, String)? {
// Do a first pass to find any substring that could be a US phone
// number. This will match the following common patterns and more:
// xxx-xxx-xxxx
// xxx xxx xxxx
// (xxx) xxx-xxxx
// (xxx)xxx-xxxx
// xxx.xxx.xxxx
// xxx xxx-xxxx
// xxx/xxx.xxxx
// +1-xxx-xxx-xxxx
// Note that this doesn't only look for digits since some digits look
// very similar to letters. This is handled later.
let pattern = #"""
(?x) # Verbose regex, allows comments
(?:\+1-?)? # Potential international prefix, may have -
[(]? # Potential opening (
\b(\w{3}) # Capture xxx
[)]? # Potential closing )
[\ -./]? # Potential separator
(\w{3}) # Capture xxx
[\ -./]? # Potential separator
(\w{4})\b # Capture xxxx
"""#
guard let range = self.range(of: pattern, options: .regularExpression, range: nil, locale: nil) else {
// No phone number found.
return nil
}
// Potential number found. Strip out punctuation, whitespace and country
// prefix.
var phoneNumberDigits = ""
let substring = String(self[range])
let nsrange = NSRange(substring.startIndex..., in: substring)
do {
// Extract the characters from the substring.
let regex = try NSRegularExpression(pattern: pattern, options: [])
if let match = regex.firstMatch(in: substring, options: [], range: nsrange) {
for rangeInd in 1 ..< match.numberOfRanges {
let range = match.range(at: rangeInd)
let matchString = (substring as NSString).substring(with: range)
phoneNumberDigits += matchString as String
}
}
} catch {
print("Error \(error) when creating pattern")
}
// Must be exactly 10 digits.
guard phoneNumberDigits.count == 10 else {
return nil
}
// Substitute commonly misrecognized characters, for example: 'S' -> '5' or 'l' -> '1'
var result = ""
let allowedChars = "0123456789"
for var char in phoneNumberDigits {
char = char.getSimilarCharacterIfNotIn(allowedChars: allowedChars)
guard allowedChars.contains(char) else {
return nil
}
result.append(char)
}
return (range, result)
}
}
class StringTracker {
var frameIndex: Int64 = 0
typealias StringObservation = (lastSeen: Int64, count: Int64)
// Dictionary of seen strings. Used to get stable recognition before
// displaying anything.
var seenStrings = [String: StringObservation]()
var bestCount = Int64(0)
var bestString = ""
func logFrame(strings: [String]) {
for string in strings {
if seenStrings[string] == nil {
seenStrings[string] = (lastSeen: Int64(0), count: Int64(-1))
}
seenStrings[string]?.lastSeen = frameIndex
seenStrings[string]?.count += 1
print("Seen \(string) \(seenStrings[string]?.count ?? 0) times")
}
var obsoleteStrings = [String]()
// Go through strings and prune any that have not been seen in while.
// Also find the (non-pruned) string with the greatest count.
for (string, obs) in seenStrings {
// Remove previously seen text after 30 frames (~1s).
if obs.lastSeen < frameIndex - 30 {
obsoleteStrings.append(string)
}
// Find the string with the greatest count.
let count = obs.count
if !obsoleteStrings.contains(string) && count > bestCount {
bestCount = Int64(count)
bestString = string
}
}
// Remove old strings.
for string in obsoleteStrings {
seenStrings.removeValue(forKey: string)
}
frameIndex += 1
}
func getStableString() -> String? {
// Require the recognizer to see the same string at least 10 times.
if bestCount >= 10 {
return bestString
} else {
return nil
}
}
func reset(string: String) {
seenStrings.removeValue(forKey: string)
bestCount = 0
bestString = ""
}
}
AppDelegate:
import UIKit
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
var window: UIWindow?
}
I was using the wrong class on the view controller. Instead of TextScanViewController, the storyboard's view controller class should have been set to VisionViewController; it was skipping a whole class. I didn't realize how classes are inherited and that there was an important order to them. I have a lot to learn, but I'm learning a lot! :)
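A minimal sketch of the inheritance issue, with hypothetical names standing in for TextScanViewController and VisionViewController: the base class only provides an empty stub, so if the storyboard instantiates the base class, the override that does the real work never runs.
import UIKit

class BaseScannerViewController: UIViewController {
    func handleFrame() {
        // Empty stub; the subclass is expected to override this.
    }
}

class VisionScannerViewController: BaseScannerViewController {
    override func handleFrame() {
        // Runs the Vision request and draws the bounding boxes.
    }
}

// If the storyboard's custom class is set to the base class,
// handleFrame() does nothing; it must be set to the subclass.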

Swift -Recording in Stereo using the Built-In Microphones

I want to use stereo instead of mono. I downloaded Apple's sample app. It says
Because a user can hold an iOS device in a variety of ways, you need to specify the orientation of the right and left channels in the stereo field. Set the built-in microphone’s directionality by configuring:
Polar pattern. The system represents the individual device microphones, and beamformers that use multiple microphones, as data sources. Select the front or back data source and set its polar pattern to stereo.
Input orientation. When recording video, set the input orientation to match the video orientation. When recording audio only, set the input orientation to match the user interface orientation. In both cases, don't change the orientation during recording.
I found a YCombinator thread that says there are issues when recording in stereo with the built-in mics while holding the phone in different orientations. In Apple's sample code there is an Orientation enum, but it isn't clearly explained what the differences are for.
In their app there is a segmented control that lets you choose which mic data source to use, but it's not very clear how to roll this into your own app.
I want to make it seamless so that the user doesn't have to choose; they can simply press record and the AVAudioRecorder takes it from there.
Start/Stop Recording
var micRecorder: AVAudioRecorder?
let audioSettings: [String:Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
AVNumberOfChannelsKey: 2,
AVSampleRateKey: 44100.0,
AVEncoderBitRateKey: 64000,
AVEncoderAudioQualityKey: AVAudioQuality.min.rawValue]
override func viewDidLoad() {
super.viewDidLoad()
do {
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP, .defaultToSpeaker])
try AVAudioSession.sharedInstance().setActive(true)
} catch {
}
enableBuiltInMic()
}
@IBAction func startRecording(_ sender: UIButton) {
do {
// How to use the AVAudioRecorder to start recording using the Stereo speaker here?
let fileURL = NSTemporaryDirectory()...
micRecorder = try AVAudioRecorder(url: fileURL, settings: audioSettings)
micRecorder?.delegate = self
micRecorder?.isMeteringEnabled = true
micRecorder?.record()
} catch {
}
}
This is the code that I pulled from Apple's sample app. I stuffed it into the same file that the micRecorder code above is in.
enum Orientation: Int {
case unknown = 0
case portrait = 1
case portraitUpsideDown = 2
case landscapeLeft = 4
case landscapeRight = 3
}
fileprivate extension Orientation {
// Convenience property to retrieve the AVAudioSession.StereoOrientation.
var inputOrientation: AVAudioSession.StereoOrientation {
return AVAudioSession.StereoOrientation(rawValue: rawValue)!
}
}
var isStereoSupported = false
private var windowOrientation: UIInterfaceOrientation { view.window?.windowScene?.interfaceOrientation ?? .unknown }
struct RecordingOption: Comparable {
let name: String
fileprivate let dataSourceName: String
static func < (lhs: RecordingOption, rhs: RecordingOption) -> Bool {
lhs.name < rhs.name
}
}
var recordingOptions: [RecordingOption] = {
let front = AVAudioSession.Orientation.front
let back = AVAudioSession.Orientation.back
let bottom = AVAudioSession.Orientation.bottom
let session = AVAudioSession.sharedInstance()
guard let dataSources = session.preferredInput?.dataSources else { return [] }
var options = [RecordingOption]()
dataSources.forEach { dataSource in
switch dataSource.orientation {
case front:
options.append(RecordingOption(name: "Front Stereo", dataSourceName: front.rawValue))
case back:
options.append(RecordingOption(name: "Back Stereo", dataSourceName: back.rawValue))
case bottom:
options.append(RecordingOption(name: "Mono", dataSourceName: bottom.rawValue))
default: ()
}
}
// Sort alphabetically
options.sort()
return options
}()
func enableBuiltInMic() {
// Get the shared audio session.
let session = AVAudioSession.sharedInstance()
// Find the built-in microphone input.
guard let availableInputs = session.availableInputs,
let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
print("The device must have a built-in microphone.")
return
}
// Make the built-in microphone input the preferred input.
do {
try session.setPreferredInput(builtInMicInput)
} catch {
print("Unable to set the built-in mic as the preferred input.")
}
}
func selectRecordingOption(_ option: RecordingOption, orientation: Orientation, completion: (StereoLayout) -> Void) {
// Get the shared audio session.
let session = AVAudioSession.sharedInstance()
// Find the built-in microphone input's data sources,
// and select the one that matches the specified name.
guard let preferredInput = session.preferredInput,
let dataSources = preferredInput.dataSources,
let newDataSource = dataSources.first(where: { $0.dataSourceName == option.dataSourceName }),
let supportedPolarPatterns = newDataSource.supportedPolarPatterns else {
completion(.none)
return
}
do {
isStereoSupported = supportedPolarPatterns.contains(.stereo)
// If the data source supports stereo, set it as the preferred polar pattern.
if isStereoSupported {
// Set the preferred polar pattern to stereo.
try newDataSource.setPreferredPolarPattern(.stereo)
}
// Set the preferred data source and polar pattern.
try preferredInput.setPreferredDataSource(newDataSource)
// Update the input orientation to match the current user interface orientation.
try session.setPreferredInputOrientation(orientation.inputOrientation)
} catch {
print("Unable to select the \(option.dataSourceName) data source.")
}
// Call the completion handler with the updated stereo layout.
completion(StereoLayout(orientation: newDataSource.orientation!,
stereoOrientation: session.inputOrientation))
}
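To make this seamless, one possible wiring (my sketch, not Apple's sample verbatim) is to pick a data source from recordingOptions and call selectRecordingOption with the current interface orientation right before starting the recorder, so the user never has to choose:
func configureStereoThenRecord() {
    // Prefer the first available option (e.g. "Front Stereo"); bail out if none exist.
    guard let option = recordingOptions.first else { return }
    // The Orientation enum's raw values above line up with UIInterfaceOrientation's,
    // so the current UI orientation can be mapped across directly.
    let orientation = Orientation(rawValue: windowOrientation.rawValue) ?? .portrait
    selectRecordingOption(option, orientation: orientation) { _ in
        // Start the AVAudioRecorder here, as in startRecording(_:) above.
    }
}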
Stereo File:
import AVFoundation
enum StereoLayout: String {
case none = "none"
case mono = "Mono"
case frontLandscapeLeft = "Front LandscapeLeft"
case frontLandscapeRight = "Front LandscapeRight"
case frontPortrait = "Front Portrait"
case frontPortraitUpsideDown = "Front PortraitUpsideDown"
case backLandscapeLeft = "Back LandscapeLeft"
case backLandscapeRight = "Back LandscapeRight"
case backPortrait = "Back Portrait"
case backPortraitUpsideDown = "Back PortraitUpsideDown"
init(orientation: AVAudioSession.Orientation, stereoOrientation: AVAudioSession.StereoOrientation) {
let front: AVAudioSession.Orientation = .front
let back: AVAudioSession.Orientation = .back
switch (orientation, stereoOrientation) {
// Front
case (front, .none):
self.init(rawValue: StereoLayout.mono.rawValue)!
case (front, .landscapeLeft):
self.init(rawValue: StereoLayout.frontLandscapeLeft.rawValue)!
case (front, .landscapeRight):
self.init(rawValue: StereoLayout.frontLandscapeRight.rawValue)!
case (front, .portrait):
self.init(rawValue: StereoLayout.frontPortrait.rawValue)!
case (front, .portraitUpsideDown):
self.init(rawValue: StereoLayout.frontPortraitUpsideDown.rawValue)!
// Back
case (back, .none):
self.init(rawValue: StereoLayout.mono.rawValue)!
case (back, .landscapeLeft):
self.init(rawValue: StereoLayout.backLandscapeLeft.rawValue)!
case (back, .landscapeRight):
self.init(rawValue: StereoLayout.backLandscapeRight.rawValue)!
case (back, .portrait):
self.init(rawValue: StereoLayout.backPortrait.rawValue)!
case (back, .portraitUpsideDown):
self.init(rawValue: StereoLayout.backPortraitUpsideDown.rawValue)!
default:
self.init(rawValue: StereoLayout.none.rawValue)!
}
}
}

Using same swift UIlabels for different function values

I am programming an iOS app using Swift where I have 3 UILabels that show different sensors' data in the same corresponding labels.
These are the 3 labels I am using:
@IBOutlet weak var xAccel: UILabel!
@IBOutlet weak var yAccel: UILabel!
@IBOutlet weak var zAccel: UILabel!
I am using a UISegmentedControl to change which data is displayed, as follows:
@IBAction func AccelDidChange(_ sender: UISegmentedControl) {
switch sender.selectedSegmentIndex {
case 0:
myAccelerometer()
break
case 1:
myGyroscope()
break
default:
myAccelerometer()
}
}
The 2 functions used above are as follows:
func myAccelerometer() {
// sets the time of each update
motion.accelerometerUpdateInterval = 0.1
//accessing the data from the accelerometer
motion.startAccelerometerUpdates(to: OperationQueue.current!) { (data, error) in
// can print the data on the console for testing purpose
//print(data as Any)
if let trueData = data {
self.view.reloadInputViews()
//setting different coordiantes to respective variables
let x = trueData.acceleration.x
let y = trueData.acceleration.y
let z = trueData.acceleration.z
//setting the variable values to label on UI
self.SensorName.text = "Accelerometer Data"
self.xAccel.text = "x : \(x)"
self.yAccel.text = "y : \(y)"
self.zAccel.text = "z : \(z)"
}
}
}
func myGyroscope() {
motion.gyroUpdateInterval = 0.1
motion.startGyroUpdates(to: OperationQueue.current!) { (data, error) in
if let trueData = data {
self.view.reloadInputViews()
//setting different coordiantes to respective variables
let x = trueData.rotationRate.x
let y = trueData.rotationRate.y
let z = trueData.rotationRate.z
//setting the variable values to label on UI
self.SensorName.text = "Gyroscope Data"
self.xAccel.text = "x: \(x)"
self.yAccel.text = "y: \(y)"
self.zAccel.text = "z: \(z)"
}
}
}
The problem is that it keeps displaying both the accelerometer and the gyroscope data on the UILabels at the same time, instead of only showing the data of the selected sensor. I have tried using break but it is still not working. If anyone could point out a possible solution, that would be great. Thanks.
EDIT -
Here is the output on screen, where you can see the values fluctuate between the different sensors. I only want readings from one sensor at a time:
https://imgur.com/a/21xW4au
@IBAction func AccelDidChange(_ sender: UISegmentedControl) {
switch sender.selectedSegmentIndex {
case 0:
motion.stopGyroUpdates() // stop the gyroscope before starting the accelerometer
myAccelerometer()
break
case 1:
motion.stopAccelerometerUpdates() // stop the accelerometer before starting the gyroscope
myGyroscope()
break
default:
myAccelerometer()
}
}
You need to stop the sensor you no longer need before starting the other one. Try this please.
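As a sketch of the same idea (assuming motion is a CMMotionManager, as in the question), you could also stop both sensors up front so each case only starts what it needs:
@IBAction func accelDidChange(_ sender: UISegmentedControl) {
    // Stop whatever is currently running, then start only the selected sensor.
    motion.stopAccelerometerUpdates()
    motion.stopGyroUpdates()
    switch sender.selectedSegmentIndex {
    case 1:
        myGyroscope()
    default:
        myAccelerometer()
    }
}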

AudioKit: How to toggle between two different AKOperationGenerator-Oscillators

I want to build an oscillator with a mode switch between square and triangle waveforms, using AKOperation.squareWave() and AKOperation.triangleWave(). When I try to build it like the following, it does not work. What's wrong? Thanks!
import AudioKitPlaygrounds
import AudioKit
let osc_square = AKOperationGenerator { parameters in
return AKOperation.squareWave(
frequency: parameters[0],
amplitude: parameters[1]
)
}
let osc_tri = AKOperationGenerator { parameters in
return AKOperation.triangleWave(
frequency: parameters[0],
amplitude: parameters[1]
)
}
var currentOsc: AKOperationGenerator = osc_square
var currentMode:Int = 1
AudioKit.output = currentOsc
try AudioKit.start()
setCurrentVCOParameters()
currentOsc.play()
let playgroundWidth = 500
import AudioKitUI
class LiveView: AKLiveViewController {
override func viewDidLoad() {
addTitle("Switch AKOperationGenerator")
let button = AKButton(title: "Mode \(currentMode)") { _ in
if currentMode == 1 {
setVCOMode(2)
}
else if currentMode == 2 {
setVCOMode(1)
}
}
addView(button)
}
}
func setVCOMode(_ modeIndex: Int) {
currentMode = modeIndex
setCurrentVCO()
}
func setCurrentVCO() {
currentOsc.stop()
switch currentMode {
case 1:
currentOsc = osc_square
case 2:
currentOsc = osc_tri
default:
currentOsc = osc_square
}
setCurrentVCOParameters()
currentOsc.start()
}
func setCurrentVCOParameters() {
currentOsc.parameters[0] = 110.0
currentOsc.parameters[1] = 0.5
}
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.liveView = LiveView()
On startup the oscillator plays its square wave fine, but when I touch the toggle switch there is silence. Toggling back brings the square wave back.
It seems this is not possible that way. I now run the different oscillators in parallel, starting/stopping the ones I need when I need them running/stopped, like this:
var currentVCO1Mode: VCOMode = .sqr
let allVCO1Generators: [AKOperationGenerator]!
enum CurrentVCO1: Int {
case sqr, tri
}
var currentVCO1:CurrentVCO1
func setCurrentVCO1() {
vco1_square.stop()
vco1_tri.stop()
switch currentVCO1Mode {
case .sqr:
vco1_square.start()
currentVCO1 = .sqr
case .tri:
vco1_tri.start()
currentVCO1 = .tri
}
setCurrentVCO1Parameters()
setCurrentVCO2()
}
etc.
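For reference, a minimal sketch of that parallel setup (assuming AudioKit 4 and the osc_square/osc_tri generators defined above; routing both through an AKMixer is my assumption, not part of the original code):
// Keep both generators connected to the output so starting/stopping either
// one never tears down the signal chain.
let mixer = AKMixer(osc_square, osc_tri)
AudioKit.output = mixer
try AudioKit.start()

osc_square.start() // begin in square mode

func switchToTriangle() {
    osc_square.stop()
    osc_tri.start()
}

func switchToSquare() {
    osc_tri.stop()
    osc_square.start()
}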

Using GKComponent to make reusable shields

I am trying to make a simple game: a spaceship at the bottom of the screen shooting asteroids "falling" from the top of the screen.
I am learning ECS and GameplayKit, and have been trying to turn shields into a component. I've heavily relied on Apple's DemoBots sample app, and have lifted the PhysicsComponent, ColliderType, and ContactNotifiableType from the sample code.
A shield needs to render the assets associated with it (one for full shields and one for half shields), needs a physics body different from the ship's because its radius is noticeably larger than the ship, and needs to keep track of its state. To do this I wrote:
final class ShieldComponent: GKComponent {
enum ShieldLevel: Int {
case full = 0, half, none
}
var currentShieldLevel: ShieldLevel = .full {
didSet {
switch currentShieldLevel {
case .full:
node.isHidden = false
node.texture = SKTexture(image: #imageLiteral(resourceName: "shield"))
case .half:
node.isHidden = false
node.texture = SKTexture(image: #imageLiteral(resourceName: "damagedShield"))
case .none:
node.isHidden = true
}
}
}
let node: SKSpriteNode
override init() {
node = SKSpriteNode(imageNamed: "shield")
super.init()
node.physicsBody = {
let physicsBody = SKPhysicsBody(circleOfRadius: node.frame.size.width / 2)
physicsBody.pinned = true
physicsBody.allowsRotation = false
physicsBody.affectedByGravity = false
ColliderType.definedCollisions[.shield] = [
.obstacle,
.powerUp
]
physicsBody.categoryBitMask = ColliderType.shield.rawValue
physicsBody.contactTestBitMask = ColliderType.obstacle.rawValue
physicsBody.collisionBitMask = ColliderType.obstacle.rawValue
return physicsBody
}()
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
func loseShields() {
if let newShieldLevel = ShieldLevel(rawValue: self.currentShieldLevel.rawValue + 1) {
self.currentShieldLevel = newShieldLevel
}
}
func restoreShields() {
self.currentShieldLevel = .full
}
}
And in my Ship initializer I do this:
let shieldComponent = ShieldComponent()
renderComponent.node.addChild(shieldComponent.node)
It would be great if I could reuse the RenderComponent and PhysicsComponent from DemoBots as I have with my ship and asteroid GKEntity subclasses, but components cannot have components. I had made ShieldComponent a ContactNotifiableType, but that doesn't work because the shield node does not actually belong to the ship entity.
I know I'm clearly coming at this wrong, and I'm at a loss as to how to correct it. I'm hoping to get an example of how to make a shield component.
You must understand that components are meant to handle only one behaviour, so get rid of the physics code in your init() function and instead build a PhysicsComponent similar to the one in DemoBots (see the sketch after the ShieldComponent below).
Tweak your RenderComponent to your liking. The problem with using the DemoBots code as-is is that it's not perfectly suited, so let's tweak it:
class RenderComponent: GKComponent {
// MARK: Properties
// The `RenderComponent` vends a node allowing an entity to be rendered in a scene.
@objc let node = SKNode()
var sprite: SKSpriteNode
// init
init(imageNamed name: String) {
self.sprite = SKSpriteNode(imageNamed: name)
super.init()
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
// MARK: GKComponent
override func didAddToEntity() {
node.entity = entity
}
override func willRemoveFromEntity() {
node.entity = nil
}
}
final class ShieldComponent: GKComponent {
var node : SKSpriteNode
//add reference to ship entity
weak var ship: Ship?
enum ShieldLevel: Int {
case full = 0, half, none
}
var currentShieldLevel: ShieldLevel = .full {
didSet {
switch currentShieldLevel {
case .full:
node.isHidden = false
node.texture = SKTexture(image: #imageLiteral(resourceName: "shield"))
case .half:
node.isHidden = false
node.texture = SKTexture(image: #imageLiteral(resourceName: "damagedShield"))
case .none:
node.isHidden = true
}
}
}
// Grab the render component from the entity. Unwrap it with a guard; if the entity doesn't have the component you get an error.
var visualComponentRef: RenderComponent {
guard let renderComponent = ship?.component(ofType: RenderComponent.self) else {
fatalError("entity must have a render component")
}
return renderComponent
}
init(shipEntity ship: Ship) {
let visualComponent = RenderComponent(imageNamed: "imageName")
node = visualComponent.sprite
self.ship = ship
super.init()
// Get rid of this. Use a PhysicsComponent for this instead. Keep your components to one behaviour only. Make them as dumb as possible.
// node.physicsBody = {
// let physicsBody = SKPhysicsBody(circleOfRadius: node.frame.size.width / 2)
// physicsBody.pinned = true
// physicsBody.allowsRotation = false
// physicsBody.affectedByGravity = false
//
// ColliderType.definedCollisions[.shield] = [
// .obstacle,
// .powerUp
// ]
//
// physicsBody.categoryBitMask = ColliderType.shield.rawValue
// physicsBody.contactTestBitMask = ColliderType.obstacle.rawValue
// physicsBody.collisionBitMask = ColliderType.obstacle.rawValue
// return physicsBody
// }()
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
func loseShields() {
if let newShieldLevel = ShieldLevel(rawValue: self.currentShieldLevel.rawValue + 1) {
self.currentShieldLevel = newShieldLevel
}
}
func restoreShields() {
self.currentShieldLevel = .full
}
}
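As mentioned above, the physics setup belongs in its own component. Here's a minimal sketch of what that PhysicsComponent might look like (assuming the DemoBots-style ColliderType with its categoryMask, collisionMask and contactMask helpers, as lifted in the question):
import GameplayKit
import SpriteKit

class PhysicsComponent: GKComponent {
    var physicsBody: SKPhysicsBody
    var colliderType: ColliderType

    init(physicsBody: SKPhysicsBody, colliderType: ColliderType) {
        self.physicsBody = physicsBody
        self.colliderType = colliderType
        super.init()
        // All of the bit-mask wiring lives here, not in ShieldComponent.
        physicsBody.categoryBitMask = colliderType.categoryMask
        physicsBody.collisionBitMask = colliderType.collisionMask
        physicsBody.contactTestBitMask = colliderType.contactMask
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}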
Make sure to look at how I changed the component's interaction with the entity. You can create a reference to the Ship entity directly, or you can check whether or not the ShieldComponent has an entity via the entity property (beware: it is an optional, so unwrap it).
Once you have the entity reference you can search it for other components and retrieve them using the component(ofType:) method, e.g. ship?.component(ofType: RenderComponent.self).
Other than this, I think you have a decent shield component.
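And for completeness, a rough sketch of how the Ship entity could wire these components together (the asset name and the choice to parent the shield node under the render node are assumptions on my part):
import GameplayKit
import SpriteKit

class Ship: GKEntity {
    override init() {
        super.init()

        let render = RenderComponent(imageNamed: "ship") // hypothetical asset name
        render.node.addChild(render.sprite)              // attach the ship sprite to the vended node
        addComponent(render)

        let shield = ShieldComponent(shipEntity: self)
        render.node.addChild(shield.node)                // shield draws on top of the ship
        addComponent(shield)

        // A PhysicsComponent (as sketched above) for the ship body would be added here too.
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}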
