UIDynamicAnimator - Move view based on Device Motion Roll - iOS

I am trying to read my device's motion, specifically the roll of the device when it is in landscape mode, and translate the returned angle into a position for a UIView (basically making an on-screen "level" to show the user that the phone is at an ideal angle).
This code gives the desired roll result, but for some reason it is not updating the levelIndicator view as expected. I am not getting any errors, so I must be using UIPushBehavior incorrectly, but I am unclear on what I need to fix. I am also not sure about setting the new Y position of the indicator on each motion update.
import UIKit
import AVFoundation
import CoreMotion
import GLKit
class CameraView: BaseViewController {
var animator : UIDynamicAnimator? = nil;
var currentRoll : Float = 0.0;
let manager = CMMotionManager()
let motionQueue = NSOperationQueue()
var countingDown:Bool = false;
@IBOutlet weak var levelIndicator: UIView!
@IBOutlet weak var level: UIView!
override func viewDidLoad() {
super.viewDidLoad()
self.animator = UIDynamicAnimator(referenceView: self.level)
let continuousPush: UIPushBehavior = UIPushBehavior(items: [levelIndicator], mode: UIPushBehaviorMode.Continuous)
self.animator?.addBehavior(continuousPush)
}
override func viewDidAppear(animated: Bool) {
super.viewDidAppear(true)
self.startReadingMotion()
}
func startReadingMotion() {
if manager.deviceMotionAvailable {
manager.deviceMotionUpdateInterval = 0.1
manager.startDeviceMotionUpdatesToQueue(motionQueue, withHandler: checkStability)
}
}
func checkStability(motion: CMDeviceMotion!, error: NSError!) {
var orientation = UIDevice.currentDevice().orientation
if (error != nil) {
NSLog("\(error)")
}
var quat = motion.attitude.quaternion
//We probably only need to check the angle of the roll (phone angle in landscape mode)
var roll = GLKMathRadiansToDegrees(Float(atan2(2*(quat.y*quat.w - quat.x*quat.z), 1 - 2*quat.y*quat.y - 2*quat.z*quat.z)));
//Other angles are available for more refinement of stability
//var pitch = GLKMathRadiansToDegrees(Float(atan2(2*(quat.x*quat.w + quat.y*quat.z), 1 - 2*quat.x*quat.x - 2*quat.z*quat.z)));
//var yaw = GLKMathRadiansToDegrees(Float(asin(2*quat.x*quat.y + 2*quat.w*quat.z)));
if(orientation == .LandscapeLeft) {
roll *= -1
}
if(roll > 100) {
roll = 100
} else if(roll < 0) {
roll = 0
}
self.currentRoll = roll
var pos = self.level.frame.height*CGFloat(roll/100)
var rect = self.levelIndicator.frame
rect.origin.y = pos
self.levelIndicator.frame = rect
if(roll > 85 && roll < 87) {
if(!countingDown) {
//This is the ideal roll position of the phone
self.levelIndicator.backgroundColor = UIColor.redColor()
}
} else {
countingDown = false;
self.levelIndicator.backgroundColor = UIColor.blackColor()
}
}
func stopReading() {
manager.stopDeviceMotionUpdates();
}
}

For anyone interested, I ended up not using UIDynamicAnimator for this but found a much simpler solution: converting the radians returned by the attitude change and using that to check the roll of the device. I also added a dispatch to the main queue to update the UI of the on-screen level.
func checkStability(motion: CMDeviceMotion!, error: NSError!) {
var orientation = UIDevice.currentDevice().orientation
if (error != nil) {
NSLog("\(error)")
}
var roll:Float = 0.0
if let attitude = motion.attitude {
roll = GLKMathRadiansToDegrees(Float(attitude.roll))
}
dispatch_async(dispatch_get_main_queue()) {
var pos = self.level.frame.height*CGFloat(roll/100)
var rect = self.levelIndicator.frame
rect.origin.y = self.level.frame.height-pos
self.levelIndicator.frame = rect
if(roll > 85 && roll < 90) {
//This is where I want the Roll to be
} else {
//The Roll is not correct yet
}
}
}
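As a side note on the original question: two things stand out that would explain why the levelIndicator never moved. A UIPushBehavior in continuous mode applies no force until it is given a direction and a nonzero magnitude, and CMMotionManager calls its handler on the queue you pass in, so any UIKit work (including changing a behavior or a frame) has to be dispatched back to the main queue; manually setting the frame of an item while an animator owns it will also fight the animator. A minimal sketch of the push-behavior route, in current Swift with illustrative values:

let push = UIPushBehavior(items: [levelIndicator], mode: .continuous)
push.pushDirection = CGVector(dx: 0, dy: 1) // push downward
push.magnitude = 0.5                        // with no magnitude set there is no force
animator?.addBehavior(push)

// Later, from the motion handler (which runs on motionQueue):
DispatchQueue.main.async {
    push.magnitude = CGFloat(roll / 100)    // roll as computed in checkStability
}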

Related

Number text recognition not highlighting/recognizing text

I am following the Apple phone number recognition sample. Normally it creates a red outline around the recognized text. Mine does not seem to recognize the text or create the red outline, even though I used their code. The only difference is that my view controller class is called "TextScanViewController" whereas theirs is just "ViewController". I went through and made sure that every "ViewController" reference was changed to "TextScanViewController". Am I missing something else that I should change?
Here is what it should look like (when I use the original Apple project) compared to what mine is doing: it should draw red outlines but does not, even when the text is perfectly centered in the rectangle.
There are 5 different Swift files I am using (PreviewView, TextScanViewController, VisionViewController, StringUtils, AppDelegate).
TextScanViewController:
import UIKit
import AVFoundation
import Vision
class TextScanViewController: UIViewController {
// MARK: - UI objects
@IBOutlet weak var previewView: PreviewView!
@IBOutlet weak var cutoutView: UIView!
@IBOutlet weak var numberView: UILabel!
var maskLayer = CAShapeLayer()
// Device orientation. Updated whenever the orientation changes to a
// different supported orientation.
var currentOrientation = UIDeviceOrientation.portrait
// MARK: - Capture related objects
private let captureSession = AVCaptureSession()
let captureSessionQueue = DispatchQueue(label: "com.example.apple-samplecode.CaptureSessionQueue")
var captureDevice: AVCaptureDevice?
var videoDataOutput = AVCaptureVideoDataOutput()
let videoDataOutputQueue = DispatchQueue(label: "com.example.apple-samplecode.VideoDataOutputQueue")
// MARK: - Region of interest (ROI) and text orientation
// Region of video data output buffer that recognition should be run on.
// Gets recalculated once the bounds of the preview layer are known.
var regionOfInterest = CGRect(x: 0, y: 0, width: 1, height: 1)
// Orientation of text to search for in the region of interest.
var textOrientation = CGImagePropertyOrientation.up
// MARK: - Coordinate transforms
var bufferAspectRatio: Double!
// Transform from UI orientation to buffer orientation.
var uiRotationTransform = CGAffineTransform.identity
// Transform bottom-left coordinates to top-left.
var bottomToTopTransform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
// Transform coordinates in ROI to global coordinates (still normalized).
var roiToGlobalTransform = CGAffineTransform.identity
// Vision -> AVF coordinate transform.
var visionToAVFTransform = CGAffineTransform.identity
// MARK: - View controller methods
override func viewDidLoad() {
super.viewDidLoad()
// Set up preview view.
previewView.session = captureSession
// Set up cutout view.
cutoutView.backgroundColor = UIColor.gray.withAlphaComponent(0.5)
maskLayer.backgroundColor = UIColor.clear.cgColor
maskLayer.fillRule = .evenOdd
cutoutView.layer.mask = maskLayer
// Starting the capture session is a blocking call. Perform setup using
// a dedicated serial dispatch queue to prevent blocking the main thread.
captureSessionQueue.async {
self.setupCamera()
// Calculate region of interest now that the camera is setup.
DispatchQueue.main.async {
// Figure out initial ROI.
self.calculateRegionOfInterest()
}
}
}
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
super.viewWillTransition(to: size, with: coordinator)
// Only change the current orientation if the new one is landscape or
// portrait. You can't really do anything about flat or unknown.
let deviceOrientation = UIDevice.current.orientation
if deviceOrientation.isPortrait || deviceOrientation.isLandscape {
currentOrientation = deviceOrientation
}
// Handle device orientation in the preview layer.
if let videoPreviewLayerConnection = previewView.videoPreviewLayer.connection {
if let newVideoOrientation = AVCaptureVideoOrientation(deviceOrientation: deviceOrientation) {
videoPreviewLayerConnection.videoOrientation = newVideoOrientation
}
}
// Orientation changed: figure out new region of interest (ROI).
calculateRegionOfInterest()
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
updateCutout()
}
// MARK: - Setup
func calculateRegionOfInterest() {
// In landscape orientation the desired ROI is specified as the ratio of
// buffer width to height. When the UI is rotated to portrait, keep the
// vertical size the same (in buffer pixels). Also try to keep the
// horizontal size the same up to a maximum ratio.
let desiredHeightRatio = 0.15
let desiredWidthRatio = 0.6
let maxPortraitWidth = 0.8
// Figure out size of ROI.
let size: CGSize
if currentOrientation.isPortrait || currentOrientation == .unknown {
size = CGSize(width: min(desiredWidthRatio * bufferAspectRatio, maxPortraitWidth), height: desiredHeightRatio / bufferAspectRatio)
} else {
size = CGSize(width: desiredWidthRatio, height: desiredHeightRatio)
}
// Make it centered.
regionOfInterest.origin = CGPoint(x: (1 - size.width) / 2, y: (1 - size.height) / 2)
regionOfInterest.size = size
// ROI changed, update transform.
setupOrientationAndTransform()
// Update the cutout to match the new ROI.
DispatchQueue.main.async {
// Wait for the next run cycle before updating the cutout. This
// ensures that the preview layer already has its new orientation.
self.updateCutout()
}
}
func updateCutout() {
// Figure out where the cutout ends up in layer coordinates.
let roiRectTransform = bottomToTopTransform.concatenating(uiRotationTransform)
let cutout = previewView.videoPreviewLayer.layerRectConverted(fromMetadataOutputRect: regionOfInterest.applying(roiRectTransform))
// Create the mask.
let path = UIBezierPath(rect: cutoutView.frame)
path.append(UIBezierPath(rect: cutout))
maskLayer.path = path.cgPath
// Move the number view down to under cutout.
var numFrame = cutout
numFrame.origin.y += numFrame.size.height
numberView.frame = numFrame
}
func setupOrientationAndTransform() {
// Recalculate the affine transform between Vision coordinates and AVF coordinates.
// Compensate for region of interest.
let roi = regionOfInterest
roiToGlobalTransform = CGAffineTransform(translationX: roi.origin.x, y: roi.origin.y).scaledBy(x: roi.width, y: roi.height)
// Compensate for orientation (buffers always come in the same orientation).
switch currentOrientation {
case .landscapeLeft:
textOrientation = CGImagePropertyOrientation.up
uiRotationTransform = CGAffineTransform.identity
case .landscapeRight:
textOrientation = CGImagePropertyOrientation.down
uiRotationTransform = CGAffineTransform(translationX: 1, y: 1).rotated(by: CGFloat.pi)
case .portraitUpsideDown:
textOrientation = CGImagePropertyOrientation.left
uiRotationTransform = CGAffineTransform(translationX: 1, y: 0).rotated(by: CGFloat.pi / 2)
default: // We default everything else to .portraitUp
textOrientation = CGImagePropertyOrientation.right
uiRotationTransform = CGAffineTransform(translationX: 0, y: 1).rotated(by: -CGFloat.pi / 2)
}
// Full Vision ROI to AVF transform.
visionToAVFTransform = roiToGlobalTransform.concatenating(bottomToTopTransform).concatenating(uiRotationTransform)
}
func setupCamera() {
guard let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back) else {
print("Could not create capture device.")
return
}
self.captureDevice = captureDevice
// NOTE:
// Requesting 4k buffers allows recognition of smaller text but will
// consume more power. Use the smallest buffer size necessary to keep
// down battery usage.
if captureDevice.supportsSessionPreset(.hd4K3840x2160) {
captureSession.sessionPreset = AVCaptureSession.Preset.hd4K3840x2160
bufferAspectRatio = 3840.0 / 2160.0
} else {
captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
bufferAspectRatio = 1920.0 / 1080.0
}
guard let deviceInput = try? AVCaptureDeviceInput(device: captureDevice) else {
print("Could not create device input.")
return
}
if captureSession.canAddInput(deviceInput) {
captureSession.addInput(deviceInput)
}
// Configure video data output.
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
if captureSession.canAddOutput(videoDataOutput) {
captureSession.addOutput(videoDataOutput)
// NOTE:
// There is a trade-off to be made here. Enabling stabilization will
// give temporally more stable results and should help the recognizer
// converge. But if it's enabled the VideoDataOutput buffers don't
// match what's displayed on screen, which makes drawing bounding
// boxes very hard. Disable it in this app to allow drawing detected
// bounding boxes on screen.
videoDataOutput.connection(with: AVMediaType.video)?.preferredVideoStabilizationMode = .off
} else {
print("Could not add VDO output")
return
}
// Set zoom and autofocus to help focus on very small text.
do {
try captureDevice.lockForConfiguration()
captureDevice.videoZoomFactor = 2
captureDevice.autoFocusRangeRestriction = .near
captureDevice.unlockForConfiguration()
} catch {
print("Could not set zoom level due to error: \(error)")
return
}
captureSession.startRunning()
}
// MARK: - UI drawing and interaction
func showString(string: String) {
// Found a definite number.
// Stop the camera synchronously to ensure that no further buffers are
// received. Then update the number view asynchronously.
captureSessionQueue.sync {
self.captureSession.stopRunning()
DispatchQueue.main.async {
self.numberView.text = string
self.numberView.isHidden = false
}
}
}
@IBAction func handleTap(_ sender: UITapGestureRecognizer) {
captureSessionQueue.async {
if !self.captureSession.isRunning {
self.captureSession.startRunning()
}
DispatchQueue.main.async {
self.numberView.isHidden = true
}
}
}
}
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension TextScanViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
// This is implemented in VisionViewController.
}
}
// MARK: - Utility extensions
extension AVCaptureVideoOrientation {
init?(deviceOrientation: UIDeviceOrientation) {
switch deviceOrientation {
case .portrait: self = .portrait
case .portraitUpsideDown: self = .portraitUpsideDown
case .landscapeLeft: self = .landscapeRight
case .landscapeRight: self = .landscapeLeft
default: return nil
}
}
}
PreviewView:
import Foundation
import UIKit
import AVFoundation
class PreviewView: UIView {
var videoPreviewLayer: AVCaptureVideoPreviewLayer {
guard let layer = layer as? AVCaptureVideoPreviewLayer else {
fatalError("Expected `AVCaptureVideoPreviewLayer` type for layer. Check PreviewView.layerClass implementation.")
}
return layer
}
var session: AVCaptureSession? {
get {
return videoPreviewLayer.session
}
set {
videoPreviewLayer.session = newValue
}
}
// MARK: UIView
override class var layerClass: AnyClass {
return AVCaptureVideoPreviewLayer.self
}
}
VisionViewController:
import UIKit
import AVFoundation
import Vision
class VisionViewController: TextScanViewController {
var request: VNRecognizeTextRequest!
// Temporal string tracker
let numberTracker = StringTracker()
override func viewDidLoad() {
// Set up vision request before letting ViewController set up the camera
// so that it exists when the first buffer is received.
request = VNRecognizeTextRequest(completionHandler: recognizeTextHandler)
super.viewDidLoad()
}
// MARK: - Text recognition
// Vision recognition handler.
func recognizeTextHandler(request: VNRequest, error: Error?) {
var numbers = [String]()
var redBoxes = [CGRect]() // Shows all recognized text lines
var greenBoxes = [CGRect]() // Shows words that might be serials
guard let results = request.results as? [VNRecognizedTextObservation] else {
return
}
let maximumCandidates = 1
for visionResult in results {
guard let candidate = visionResult.topCandidates(maximumCandidates).first else { continue }
// Draw red boxes around any detected text, and green boxes around
// any detected phone numbers. The phone number may be a substring
// of the visionResult. If a substring, draw a green box around the
// number and a red box around the full string. If the number covers
// the full result only draw the green box.
var numberIsSubstring = true
if let result = candidate.string.extractPhoneNumber() {
let (range, number) = result
// Number may not cover full visionResult. Extract bounding box
// of substring.
if let box = try? candidate.boundingBox(for: range)?.boundingBox {
numbers.append(number)
greenBoxes.append(box)
numberIsSubstring = !(range.lowerBound == candidate.string.startIndex && range.upperBound == candidate.string.endIndex)
}
}
if numberIsSubstring {
redBoxes.append(visionResult.boundingBox)
}
}
// Log any found numbers.
numberTracker.logFrame(strings: numbers)
show(boxGroups: [(color: UIColor.red.cgColor, boxes: redBoxes), (color: UIColor.green.cgColor, boxes: greenBoxes)])
// Check if we have any temporally stable numbers.
if let sureNumber = numberTracker.getStableString() {
showString(string: sureNumber)
numberTracker.reset(string: sureNumber)
}
}
override func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
// Configure for running in real-time.
request.recognitionLevel = .fast
// Language correction won't help recognizing phone numbers. It also
// makes recognition slower.
request.usesLanguageCorrection = false
// Only run on the region of interest for maximum speed.
request.regionOfInterest = regionOfInterest
let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: textOrientation, options: [:])
do {
try requestHandler.perform([request])
} catch {
print(error)
}
}
}
// MARK: - Bounding box drawing
// Draw a box on screen. Must be called from main queue.
var boxLayer = [CAShapeLayer]()
func draw(rect: CGRect, color: CGColor) {
let layer = CAShapeLayer()
layer.opacity = 0.5
layer.borderColor = color
layer.borderWidth = 2
layer.frame = rect
boxLayer.append(layer)
previewView.videoPreviewLayer.insertSublayer(layer, at: 1)
}
// Remove all drawn boxes. Must be called on main queue.
func removeBoxes() {
for layer in boxLayer {
layer.removeFromSuperlayer()
}
boxLayer.removeAll()
}
typealias ColoredBoxGroup = (color: CGColor, boxes: [CGRect])
// Draws groups of colored boxes.
func show(boxGroups: [ColoredBoxGroup]) {
DispatchQueue.main.async {
let layer = self.previewView.videoPreviewLayer
self.removeBoxes()
for boxGroup in boxGroups {
let color = boxGroup.color
for box in boxGroup.boxes {
let rect = layer.layerRectConverted(fromMetadataOutputRect: box.applying(self.visionToAVFTransform))
self.draw(rect: rect, color: color)
}
}
}
}
}
StringUtils:
import Foundation
extension Character {
// Given a list of allowed characters, try to convert self to those in list
// if not already in it. This handles some common misclassifications for
// characters that are visually similar and can only be correctly recognized
// with more context and/or domain knowledge. Some examples (should be read
// in Menlo or some other font that has different symbols for all characters):
// 1 and l are the same character in Times New Roman
// I and l are the same character in Helvetica
// 0 and O are extremely similar in many fonts
// oO, wW, cC, sS, pP and others only differ by size in many fonts
func getSimilarCharacterIfNotIn(allowedChars: String) -> Character {
let conversionTable = [
"s": "S",
"S": "5",
"5": "S",
"o": "O",
"Q": "O",
"O": "0",
"0": "O",
"l": "I",
"I": "1",
"1": "I",
"B": "8",
"8": "B"
]
// Allow a maximum of two substitutions to handle 's' -> 'S' -> '5'.
let maxSubstitutions = 2
var current = String(self)
var counter = 0
while !allowedChars.contains(current) && counter < maxSubstitutions {
if let altChar = conversionTable[current] {
current = altChar
counter += 1
} else {
// Doesn't match anything in our table. Give up.
break
}
}
return current.first!
}
}
extension String {
// Extracts the first US-style phone number found in the string, returning
// the range of the number and the number itself as a tuple.
// Returns nil if no number is found.
func extractPhoneNumber() -> (Range<String.Index>, String)? {
// Do a first pass to find any substring that could be a US phone
// number. This will match the following common patterns and more:
// xxx-xxx-xxxx
// xxx xxx xxxx
// (xxx) xxx-xxxx
// (xxx)xxx-xxxx
// xxx.xxx.xxxx
// xxx xxx-xxxx
// xxx/xxx.xxxx
// +1-xxx-xxx-xxxx
// Note that this doesn't only look for digits since some digits look
// very similar to letters. This is handled later.
let pattern = #"""
(?x) # Verbose regex, allows comments
(?:\+1-?)? # Potential international prefix, may have -
[(]? # Potential opening (
\b(\w{3}) # Capture xxx
[)]? # Potential closing )
[\ -./]? # Potential separator
(\w{3}) # Capture xxx
[\ -./]? # Potential separator
(\w{4})\b # Capture xxxx
"""#
guard let range = self.range(of: pattern, options: .regularExpression, range: nil, locale: nil) else {
// No phone number found.
return nil
}
// Potential number found. Strip out punctuation, whitespace and country
// prefix.
var phoneNumberDigits = ""
let substring = String(self[range])
let nsrange = NSRange(substring.startIndex..., in: substring)
do {
// Extract the characters from the substring.
let regex = try NSRegularExpression(pattern: pattern, options: [])
if let match = regex.firstMatch(in: substring, options: [], range: nsrange) {
for rangeInd in 1 ..< match.numberOfRanges {
let range = match.range(at: rangeInd)
let matchString = (substring as NSString).substring(with: range)
phoneNumberDigits += matchString as String
}
}
} catch {
print("Error \(error) when creating pattern")
}
// Must be exactly 10 digits.
guard phoneNumberDigits.count == 10 else {
return nil
}
// Substitute commonly misrecognized characters, for example: 'S' -> '5' or 'l' -> '1'
var result = ""
let allowedChars = "0123456789"
for var char in phoneNumberDigits {
char = char.getSimilarCharacterIfNotIn(allowedChars: allowedChars)
guard allowedChars.contains(char) else {
return nil
}
result.append(char)
}
return (range, result)
}
}
class StringTracker {
var frameIndex: Int64 = 0
typealias StringObservation = (lastSeen: Int64, count: Int64)
// Dictionary of seen strings. Used to get stable recognition before
// displaying anything.
var seenStrings = [String: StringObservation]()
var bestCount = Int64(0)
var bestString = ""
func logFrame(strings: [String]) {
for string in strings {
if seenStrings[string] == nil {
seenStrings[string] = (lastSeen: Int64(0), count: Int64(-1))
}
seenStrings[string]?.lastSeen = frameIndex
seenStrings[string]?.count += 1
print("Seen \(string) \(seenStrings[string]?.count ?? 0) times")
}
var obsoleteStrings = [String]()
// Go through strings and prune any that have not been seen in while.
// Also find the (non-pruned) string with the greatest count.
for (string, obs) in seenStrings {
// Remove previously seen text after 30 frames (~1s).
if obs.lastSeen < frameIndex - 30 {
obsoleteStrings.append(string)
}
// Find the string with the greatest count.
let count = obs.count
if !obsoleteStrings.contains(string) && count > bestCount {
bestCount = Int64(count)
bestString = string
}
}
// Remove old strings.
for string in obsoleteStrings {
seenStrings.removeValue(forKey: string)
}
frameIndex += 1
}
func getStableString() -> String? {
// Require the recognizer to see the same string at least 10 times.
if bestCount >= 10 {
return bestString
} else {
return nil
}
}
func reset(string: String) {
seenStrings.removeValue(forKey: string)
bestCount = 0
bestString = ""
}
}
AppDelegate:
import UIKit
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
var window: UIWindow?
}
I was using the wrong class on the view controller. Instead of TextScanViewController, it should have been set to VisionViewController; it was skipping a whole class. I didn't realize how classes are inherited and that there was an important order to them. I have a lot to learn, but I'm learning a lot! :)
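For anyone hitting the same thing, the mechanism is worth spelling out: the storyboard instantiates exactly the class the scene names, and the base class here deliberately leaves the capture callback empty, so pointing the scene at TextScanViewController means every frame is silently dropped. A stripped-down illustration with hypothetical names:

import UIKit

class BaseController: UIViewController {
    // Intentionally empty, like captureOutput(_:didOutput:from:) above;
    // the subclass is expected to override it.
    func handleFrame() { }
}

class DerivedController: BaseController {
    // Only runs if the storyboard scene's custom class is DerivedController.
    override func handleFrame() { print("frame handled") }
}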

Toggle flash in iOS Swift

I am building an image classifier app. On the camera screen I have a switch button which I want to use to toggle the flash, so that the user can switch on the flash in low light.
Here is my code:
import UIKit
import AVFoundation
import Vision
// controlling the pace of the machine vision analysis
var lastAnalysis: TimeInterval = 0
var pace: TimeInterval = 0.33 // in seconds, classification will not repeat faster than this value
// performance tracking
let trackPerformance = false // use "true" for performance logging
var frameCount = 0
let framesPerSample = 10
var startDate = NSDate.timeIntervalSinceReferenceDate
var flash=0
class ImageDetectionViewController: UIViewController {
var callBackImageDetection :(State)->Void = { state in
}
@IBOutlet weak var previewView: UIView!
@IBOutlet weak var stackView: UIStackView!
@IBOutlet weak var lowerView: UIView!
@IBAction func swithch(_ sender: UISwitch) {
if(sender.isOn == true)
{
stopActiveSession();
let captureSession=AVCaptureSession()
let captureDevice: AVCaptureDevice?
setupCamera(flash: 1)
}
}
var previewLayer: AVCaptureVideoPreviewLayer!
let bubbleLayer = BubbleLayer(string: "")
let queue = DispatchQueue(label: "videoQueue")
var captureSession = AVCaptureSession()
var captureDevice: AVCaptureDevice?
let videoOutput = AVCaptureVideoDataOutput()
var unknownCounter = 0 // used to track how many unclassified images in a row
let confidence: Float = 0.8
// MARK: Load the Model
let targetImageSize = CGSize(width: 227, height: 227) // must match model data input
lazy var classificationRequest: [VNRequest] = {
do {
// Load the Custom Vision model.
// To add a new model, drag it to the Xcode project browser making sure that the "Target Membership" is checked.
// Then update the following line with the name of your new model.
// let model = try VNCoreMLModel(for: Fruit().model)
let model = try VNCoreMLModel(for: CodigocubeAI().model)
let classificationRequest = VNCoreMLRequest(model: model, completionHandler: self.handleClassification)
return [ classificationRequest ]
} catch {
fatalError("Can't load Vision ML model: \(error)")
}
}()
// MARK: Handle image classification results
func handleClassification(request: VNRequest, error: Error?) {
guard let observations = request.results as? [VNClassificationObservation]
else { fatalError("unexpected result type from VNCoreMLRequest") }
guard let best = observations.first else {
fatalError("classification didn't return any results")
}
// Use results to update user interface (includes basic filtering)
print("\(best.identifier): \(best.confidence)")
if best.identifier.starts(with: "Unknown") || best.confidence < confidence {
if self.unknownCounter < 3 { // a bit of a low-pass filter to avoid flickering
self.unknownCounter += 1
} else {
self.unknownCounter = 0
DispatchQueue.main.async {
self.bubbleLayer.string = nil
}
}
} else {
self.unknownCounter = 0
DispatchQueue.main.async {[weak self] in
guard let strongSelf = self
else
{
return
}
// Trimming labels because they sometimes have unexpected line endings which show up in the GUI
let identifierString = best.identifier.trimmingCharacters(in: CharacterSet.whitespacesAndNewlines)
strongSelf.bubbleLayer.string = identifierString
let state : State = strongSelf.getState(identifierStr: identifierString)
strongSelf.stopActiveSession()
strongSelf.navigationController?.popViewController(animated: true)
strongSelf.callBackImageDetection(state)
}
}
}
func getState(identifierStr:String)->State
{
var state :State = .none
if identifierStr == "entertainment"
{
state = .entertainment
}
else if identifierStr == "geography"
{
state = .geography
}
else if identifierStr == "history"
{
state = .history
}
else if identifierStr == "knowledge"
{
state = .education
}
else if identifierStr == "science"
{
state = .science
}
else if identifierStr == "sports"
{
state = .sports
}
else
{
state = .none
}
return state
}
// MARK: Lifecycle
override func viewDidLoad() {
super.viewDidLoad()
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewView.layer.addSublayer(previewLayer)
}
override func viewDidAppear(_ animated: Bool) {
self.edgesForExtendedLayout = UIRectEdge.init(rawValue: 0)
bubbleLayer.opacity = 0.0
bubbleLayer.position.x = self.view.frame.width / 2.0
bubbleLayer.position.y = lowerView.frame.height / 2
lowerView.layer.addSublayer(bubbleLayer)
setupCamera(flash:2)
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
previewLayer.frame = previewView.bounds;
}
// MARK: Camera handling
func setupCamera(flash :Int) {
let deviceDiscovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
if let device = deviceDiscovery.devices.last {
if(flash == 1)
{
if (device.hasTorch) {
do {
try device.lockForConfiguration()
if (device.isTorchAvailable) {
do {
try device.setTorchModeOn(level:0.2 )
}
catch
{
print(error)
}
device.unlockForConfiguration()
}
}
catch
{
print(error)
}
}
}
captureDevice = device
beginSession()
}
}
func beginSession() {
do {
videoOutput.videoSettings = [((kCVPixelBufferPixelFormatTypeKey as NSString) as String) : (NSNumber(value: kCVPixelFormatType_32BGRA) as! UInt32)]
videoOutput.alwaysDiscardsLateVideoFrames = true
videoOutput.setSampleBufferDelegate(self, queue: queue)
captureSession.sessionPreset = .hd1920x1080
captureSession.addOutput(videoOutput)
let input = try AVCaptureDeviceInput(device: captureDevice!)
captureSession.addInput(input)
captureSession.startRunning()
} catch {
print("error connecting to capture device")
}
}
func stopActiveSession()
{
if captureSession.isRunning == true
{
captureSession.stopRunning()
}
}
override func viewWillDisappear(_ animated: Bool) {
self.stopActiveSession()
}
deinit {
print("deinit called")
}
}
// MARK: Video Data Delegate
extension ImageDetectionViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
// called for each frame of video
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
let currentDate = NSDate.timeIntervalSinceReferenceDate
// control the pace of the machine vision to protect battery life
if currentDate - lastAnalysis >= pace {
lastAnalysis = currentDate
} else {
return // don't run the classifier more often than we need
}
// keep track of performance and log the frame rate
if trackPerformance {
frameCount = frameCount + 1
if frameCount % framesPerSample == 0 {
let diff = currentDate - startDate
if (diff > 0) {
if pace > 0.0 {
print("WARNING: Frame rate of image classification is being limited by \"pace\" setting. Set to 0.0 for fastest possible rate.")
}
print("\(String.localizedStringWithFormat("%0.2f", (diff/Double(framesPerSample))))s per frame (average)")
}
startDate = currentDate
}
}
// Crop and resize the image data.
// Note, this uses a Core Image pipeline that could be appended with other pre-processing.
// If we don't want to do anything custom, we can remove this step and let the Vision framework handle
// crop and resize as long as we are careful to pass the orientation properly.
guard let croppedBuffer = croppedSampleBuffer(sampleBuffer, targetSize: targetImageSize) else {
return
}
do {
let classifierRequestHandler = VNImageRequestHandler(cvPixelBuffer: croppedBuffer, options: [:])
try classifierRequestHandler.perform(classificationRequest)
} catch {
print(error)
}
}
}
let context = CIContext()
var rotateTransform: CGAffineTransform?
var scaleTransform: CGAffineTransform?
var cropTransform: CGAffineTransform?
var resultBuffer: CVPixelBuffer?
func croppedSampleBuffer(_ sampleBuffer: CMSampleBuffer, targetSize: CGSize) -> CVPixelBuffer? {
guard let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
fatalError("Can't convert to CVImageBuffer.")
}
// Only doing these calculations once for efficiency.
// If the incoming images could change orientation or size during a session, this would need to be reset when that happens.
if rotateTransform == nil {
let imageSize = CVImageBufferGetEncodedSize(imageBuffer)
let rotatedSize = CGSize(width: imageSize.height, height: imageSize.width)
guard targetSize.width < rotatedSize.width, targetSize.height < rotatedSize.height else {
fatalError("Captured image is smaller than image size for model.")
}
let shorterSize = (rotatedSize.width < rotatedSize.height) ? rotatedSize.width : rotatedSize.height
rotateTransform = CGAffineTransform(translationX: imageSize.width / 2.0, y: imageSize.height / 2.0).rotated(by: -CGFloat.pi / 2.0).translatedBy(x: -imageSize.height / 2.0, y: -imageSize.width / 2.0)
let scale = targetSize.width / shorterSize
scaleTransform = CGAffineTransform(scaleX: scale, y: scale)
// Crop input image to output size
let xDiff = rotatedSize.width * scale - targetSize.width
let yDiff = rotatedSize.height * scale - targetSize.height
cropTransform = CGAffineTransform(translationX: xDiff/2.0, y: yDiff/2.0)
}
// Convert to CIImage because it is easier to manipulate
let ciImage = CIImage(cvImageBuffer: imageBuffer)
let rotated = ciImage.transformed(by: rotateTransform!)
let scaled = rotated.transformed(by: scaleTransform!)
let cropped = scaled.transformed(by: cropTransform!)
// Note that the above pipeline could be easily appended with other image manipulations.
// For example, to change the image contrast. It would be most efficient to handle all of
// the image manipulation in a single Core Image pipeline because it can be hardware optimized.
// Only need to create this buffer one time and then we can reuse it for every frame
if resultBuffer == nil {
let result = CVPixelBufferCreate(kCFAllocatorDefault, Int(targetSize.width), Int(targetSize.height), kCVPixelFormatType_32BGRA, nil, &resultBuffer)
guard result == kCVReturnSuccess else {
fatalError("Can't allocate pixel buffer.")
}
}
// Render the Core Image pipeline to the buffer
context.render(cropped, to: resultBuffer!)
// For debugging
// let image = imageBufferToUIImage(resultBuffer!)
// print(image.size) // set breakpoint to see image being provided to CoreML
return resultBuffer
}
// Only used for debugging.
// Turns an image buffer into a UIImage that is easier to display in the UI or debugger.
func imageBufferToUIImage(_ imageBuffer: CVImageBuffer) -> UIImage {
CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
let quartzImage = context!.makeImage()
CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
let image = UIImage(cgImage: quartzImage!, scale: 1.0, orientation: .right)
return image
}
I am getting the error 'An AVCaptureOutput instance may not be added to more than one session'.
Now I want to give the user the ability to toggle the flash. How do I destroy the active camera session and open a new one with the flash on?
Can anyone help me, or suggest any other way to achieve this?
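No answer was recorded above, but two notes that may help. The 'may not be added to more than one session' error comes from beginSession() calling captureSession.addOutput(videoOutput) again on a session that already contains that output (a captureSession.canAddOutput(videoOutput) guard, as in the text-recognition code earlier on this page, avoids it). More simply, the torch can usually be toggled on the existing device without tearing the session down at all; a minimal sketch, assuming the captureDevice property from the question:

func setTorch(on: Bool) {
    guard let device = captureDevice, device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        if on && device.isTorchAvailable {
            try device.setTorchModeOn(level: 0.2) // brightness in 0.0...1.0
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}

The switch action then reduces to setTorch(on: sender.isOn), with no session teardown or rebuild.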

How to use the accelerometer in Swift 3

I need help using the accelerometer with Swift 3.
This is my code:
var motion = CMMotionManager()
@IBOutlet weak var statusAccel: UILabel!
override func viewDidAppear(_ animated: Bool) {
motion.startAccelerometerUpdates(to: OperationQueue.current!){
(data , error) in
if let trueData = data {
self.view.reloadInputViews()
self.statusAccel.text = "\(trueData)"
}
}
}
It works, but it just shows me X, Y, and Z, and I want to use Z.
Example: if Z = 2, do something.
You can access the acceleration on the Z-axis via CMAccelerometerData.acceleration.z. If you are unsure about how to access a certain property of a class, always check the documentation, either in Xcode directly or on Apple's documentation website; you can save a lot of time with this approach.
motion.startAccelerometerUpdates(to: OperationQueue.current!, withHandler: { data, error in
guard error == nil else { return }
guard let accelerometerData = data else { return }
if accelerometerData.acceleration.z == 2.0 {
//do something
}
})
The data object handed to the startAccelerometerUpdates(...) handler is of type CMAccelerometerData, which has a CMAcceleration property. From this you can get the z component.
var motion = CMMotionManager()
@IBOutlet weak var statusAccel: UILabel!
override func viewDidAppear(_ animated: Bool) {
motion.startAccelerometerUpdates(to: OperationQueue.current!){
(data , error) in
if let trueData = data {
self.view.reloadInputViews()
self.statusAccel.text = "\(trueData)"
if trueData.acceleration.z == 2 {
// do things...
}
}
}
}
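One caveat that applies to both snippets above: acceleration components arrive as noisy Double values (in units of g), so an exact == 2 comparison will almost never fire at runtime. Comparing against a threshold is more robust; a small variation on the same handler (the 2.0 cutoff just mirrors the question's example):

motion.startAccelerometerUpdates(to: OperationQueue.main) { data, error in
    guard error == nil, let z = data?.acceleration.z else { return }
    if z >= 2.0 { // crossed the threshold, rather than exactly equal to it
        // do something
    }
}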

Stuck either calculating correctly, or with code that crashes

I'm fighting the problem of either having correct calculations (that's awesome) but crashes if I leave a text field blank in the simulator (not awesome), OR, when coded differently, no crashes for blank text fields (good) but incorrect calculations (not good).
Here's what I'm trying to do:
Get user input for 4 values: fibers counted, blank fibers counted (OK if it's zero), fields counted, and sample volume. I put in defaults to handle any instances where text boxes are blank, hoping to prevent crashes. Then the app calculates fiber density, limit of detection, and sample result. Pretty simple for now, but I hope to add to this later.
My main problem has been optionals. I've read up on them and done my best to code correctly, but there are times when Xcode forces me into a corner. So my app crashes when text fields are blank, or I can prevent crashes but somehow the default values get used in the calculations and the actual results are off.
I must be using if-let incorrectly, so I tried what you see in the code below.
I'll put the code below. As always, thanks for any help, and for understanding that I'm new and trying to get these concepts down while being self-taught (besides lessons learned here). Thanks!
import UIKit
class ViewController: UIViewController {
@IBOutlet weak var fibersCountedTextField: UITextField!
@IBOutlet weak var blankFibersCountedTextField: UITextField!
@IBOutlet weak var fieldsCountedTextField: UITextField!
@IBOutlet weak var sampleVolumeTextField: UITextField!
@IBOutlet weak var fiberDensityLabel: UILabel!
@IBOutlet weak var limitOfDetectionLabel: UILabel!
@IBOutlet weak var sampleResultLabel: UILabel!
@IBAction func calculateBtnPressed(_ sender: Any) {
var blankCheck: String?
var blankFibersCounted: Double?
var fibersCheck: String?
var fibersCounted: Double?
var fieldsCheck: String?
var fieldsCounted: Double?
var volumeCheck: String?
var sampleVolume: Double?
var sampleResultNumerator: Double?
var sampleResultDenominator: Double?
var sampleResult: Double?
var sampleResultRounded: Double
var fiberDensity: Double
var fiberDensityRounded: Double
var limitOfDetectionNumerator: Double
var limitOfDetectionDenominator: Double
var sampleLimitOfDetection: Double
var sampleLimitOfDetectionRounded: Double
struct TypicalSampleData {
let typicalBlankFibersCounted: Double
let typicalFibersCounted: Double
let typicalFieldsCounted: Double
let typicalSampleVolume: Double
init(typicalBlankFibersCounted: Double = 4.0, typicalFibersCounted: Double = 5.5, typicalFieldsCounted: Double = 100.0, typicalSampleVolume: Double = 555.0) {
self.typicalBlankFibersCounted = typicalBlankFibersCounted
self.typicalFibersCounted = typicalFibersCounted
self.typicalFieldsCounted = typicalFieldsCounted
self.typicalSampleVolume = typicalSampleVolume
}
}
//this code calculates correctly but crashes if text boxes left empty in the simulator.
let defaultSampleData = TypicalSampleData()
//if let blankCheck = blankFibersCountedTextField.text {
// blankFibersCounted = Double(blankCheck)
//} else {
// blankFibersCounted = defaultSampleData.typicalBlankFibersCounted
//}
//blankCheck = blankFibersCountedTextField.text
//var blankCheckDbl: Double?
//if let blankCheck = blankFibersCountedTextField.text {
// blankFibersCounted = Double(blankCheck)
//}; blankFibersCounted = defaultSampleData.typicalBlankFibersCounted
//if let fibersCheck = fibersCountedTextField.text {
// fibersCounted = Double(fibersCheck)
//} else {
// fibersCounted = defaultSampleData.typicalFibersCounted
//}
if blankFibersCountedTextField != nil {
blankCheck = blankFibersCountedTextField.text
} else {
blankFibersCounted = defaultSampleData.typicalBlankFibersCounted
}
if blankCheck == blankFibersCountedTextField.text {
blankFibersCounted = Double(blankCheck!)
} else {
blankFibersCounted = defaultSampleData.typicalBlankFibersCounted
}
if fibersCountedTextField != nil {
fibersCheck = fibersCountedTextField.text
} else {
fibersCounted = defaultSampleData.typicalFibersCounted
}
if fibersCheck == fibersCountedTextField.text {
fibersCounted = Double(fibersCheck!)
} else {
fibersCounted = defaultSampleData.typicalFibersCounted
}
if fieldsCountedTextField != nil {
fieldsCheck = fieldsCountedTextField.text
} else {
fieldsCounted = defaultSampleData.typicalFieldsCounted
}
if fieldsCheck == fieldsCountedTextField.text {
fieldsCounted = Double(fieldsCheck!)
} else {
fieldsCounted = defaultSampleData.typicalFieldsCounted
}
if sampleVolumeTextField != nil {
volumeCheck = sampleVolumeTextField.text
} else {
sampleVolume = defaultSampleData.typicalSampleVolume
}
if volumeCheck == sampleVolumeTextField.text {
sampleVolume = Double(volumeCheck!)
} else {
sampleVolume = defaultSampleData.typicalSampleVolume
}
//if let fibersCheck = fibersCountedTextField.text {
// fibersCounted = Double(fibersCheck)
//}; fibersCounted = defaultSampleData.typicalFibersCounted
//if let fieldsCheck = fieldsCountedTextField.text {
// fieldsCounted = Double(fieldsCheck)
//} else {
// fieldsCounted = defaultSampleData.typicalFieldsCounted
//}
//if let fieldsCheck = fieldsCountedTextField.text {
// fieldsCounted = Double(fieldsCheck)
//}; fieldsCounted = defaultSampleData.typicalFieldsCounted
//if let volumeCheck = sampleVolumeTextField.text {
// sampleVolume = Double(volumeCheck)
//} else {
// sampleVolume = defaultSampleData.typicalSampleVolume
//}
//if let volumeCheck = sampleVolumeTextField.text {
// sampleVolume = Double(volumeCheck)
//}; sampleVolume = defaultSampleData.typicalSampleVolume
//calculate result
if let fibersCounted = fibersCounted, fibersCounted < 5.5 {
sampleResultNumerator = (385 * (5.5 / 100))
sampleResultDenominator = (7.85 * sampleVolume!)
sampleResult = sampleResultNumerator! / sampleResultDenominator!
} else {
sampleResultNumerator = 385 * ((fibersCounted! - blankFibersCounted!) / fieldsCounted!)
sampleResultDenominator = (7.85 * sampleVolume!)
sampleResult = sampleResultNumerator! / sampleResultDenominator!
}
sampleResultRounded = Double(round(1000 * sampleResult!)/1000)
//calculate sample limit of detection
//insert if-let statement here, similar to above?
limitOfDetectionNumerator = 385 * (5.5 / 100)
limitOfDetectionDenominator = (7.85 * sampleVolume!)
sampleLimitOfDetection = limitOfDetectionNumerator / limitOfDetectionDenominator
sampleLimitOfDetectionRounded = Double(round(1000 * sampleLimitOfDetection)/1000)
//calculate fiber density
//insert if-let statement here, similar to above?
fiberDensity = ((fibersCounted! - blankFibersCounted!) / fieldsCounted!) / 0.00785
fiberDensityRounded = Double(round(1000 * fiberDensity)/1000)
fiberDensityLabel.text = String(fiberDensityRounded)
limitOfDetectionLabel.text = String(sampleLimitOfDetectionRounded)
sampleResultLabel.text = String(sampleResultRounded)
}
override func viewDidLoad() {
super.viewDidLoad()
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
}
}
There are various ways to approach your task, but here is a very straightforward method (replace fieldA, fieldB, etc. with your actual @IBOutlet names):
@IBAction func calculateBtnPressed(_ sender: Any) {
let defaultSampleData = TypicalSampleData()
// get the text from fieldA - set it to "" if fieldA.text is nil
let textA = fieldA.text ?? ""
// get the text from fieldB - set it to "" if fieldB.text is nil
let textB = fieldB.text ?? ""
// if textA CAN be converted to a Double, use it, otherwise use default value
let valA = Double(textA) ?? defaultSampleData.typicalBlankFibersCounted
// if textB CAN be converted to a Double, use it, otherwise use default value
let valB = Double(textB) ?? defaultSampleData.typicalFibersCounted
// set resultLabel.text to the product of valA and valB
resultLabel.text = "Result: \(valA * valB)"
}
The idea being: make sure you have default values for each part of your "equation".
Depending on your actual use-case, of course, you may want to display an Error popup message instead of using default values. But this should get you on your way.
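Applied to the outlets in the question, the same pattern might look like the sketch below (it assumes TypicalSampleData is moved out of the method so it stays in scope, and shows only the fiber density part):

@IBAction func calculateBtnPressed(_ sender: Any) {
    let defaults = TypicalSampleData()
    // Double("") is nil, so an empty or non-numeric field falls through to its default.
    let blankFibers = Double(blankFibersCountedTextField.text ?? "") ?? defaults.typicalBlankFibersCounted
    let fibers = Double(fibersCountedTextField.text ?? "") ?? defaults.typicalFibersCounted
    let fields = Double(fieldsCountedTextField.text ?? "") ?? defaults.typicalFieldsCounted
    // Same formula as the question, with no force-unwraps left to crash on.
    let fiberDensity = ((fibers - blankFibers) / fields) / 0.00785
    fiberDensityLabel.text = String(round(1000 * fiberDensity) / 1000)
}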

iPhone Accelerometer forward,back,left,right

I would like to know how to write code that lets me place an iPhone flat on a surface and move it forward to get a positive y value, back to get -y, left to get -x, and right to get +x. The code I am using only seems to measure positive acceleration, and I cannot generate negative values while the phone is laid flat on the surface. If I lift the phone and then tilt it to the right or left, I will get positive and negative x, and the same with y, but I want the same effect while leaving it on the surface and simply shifting the phone left and right, keeping it facing forward without any turning or rotation. I am using Swift but will also be happy to use Objective-C if there is a solution.
import UIKit
import CoreMotion
class ViewController: UIViewController {
var currentMaxAccelX: Double = 0.0
var currentMaxAccelY: Double = 0.0
var currentMaxAccelZ: Double = 0.0
let motionManager = CMMotionManager()
@IBOutlet var accX : UILabel? = nil
@IBOutlet var accY : UILabel? = nil
@IBOutlet var accZ : UILabel? = nil
@IBOutlet var maxAccX : UILabel? = nil
@IBOutlet var maxAccY : UILabel? = nil
@IBOutlet var maxAccZ : UILabel? = nil
override func viewDidLoad() {
super.viewDidLoad()
motionManager.accelerometerUpdateInterval = 0.1
//[self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue new] withHandler:^(CMDeviceMotion *motion, NSError *error)
motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue.currentQueue(), withHandler: {(accelerometerData: CMAccelerometerData!, error:NSError!)in
self.outputAccelerationData(accelerometerData.acceleration)
if (error != nil)
{
println("\(error)")
}
})
}
func outputAccelerationData(acceleration:CMAcceleration)
{
// Swift does not have string formatting yet
accX?.text = NSString(format:"%.4f", acceleration.x) as String
if fabs(acceleration.x) > fabs(currentMaxAccelX)
{
currentMaxAccelX = acceleration.x
}
accY?.text = NSString(format:"%.4f", acceleration.y) as String
if acceleration.y > currentMaxAccelY
{
currentMaxAccelY = acceleration.y
}
accZ?.text = NSString(format:"%.4f", acceleration.z) as String
if acceleration.z > currentMaxAccelZ
{
currentMaxAccelZ = acceleration.z
}
maxAccX?.text = NSString(format:"%.4f", currentMaxAccelX) as String
maxAccY?.text = NSString(format:"%.4f", currentMaxAccelY) as String
maxAccZ?.text = NSString(format:"%.4f", currentMaxAccelZ) as String
}
}
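No answer was posted for this one, but one observation may explain the behavior: the raw accelerometer reading includes gravity and reports acceleration only, not velocity, so sliding a flat phone produces a brief signed spike when it starts moving and an opposite spike when it stops; there is no steady negative value to read. CMDeviceMotion's userAcceleration has gravity already subtracted, which makes those signed spikes easy to see on both axes. A sketch in current Swift (the 0.05 g threshold is an arbitrary noise cutoff, not a tuned value):

import CoreMotion

let motionManager = CMMotionManager()

func startReadingShifts() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 0.1
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let a = motion?.userAcceleration else { return }
        // With gravity removed, a phone lying flat reads ~0 until it is shifted.
        if a.y > 0.05 { print("forward") }
        if a.y < -0.05 { print("back") }
        if a.x > 0.05 { print("right") }
        if a.x < -0.05 { print("left") }
    }
}

Turning these spikes into an actual position would require double integration, which drifts quickly; for simple shift-direction detection the sign of the spike is usually enough.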
