Is there any way to do object detection in iOS without classifying the objects?

Purpose: to detect an object in iOS without classifying it.
I have a TFLite model to use in Xcode, but every approach I have found works as a classifier. I also tried converting the model to Core ML, but it doesn't work properly.
Below is the code that is called every time a frame is captured; it loads the model and runs the request:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let model = try? VNCoreMLModel(for: Resnet50().model) else { return }
    let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in
        guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }
        guard let Observation = results.first else { return }
        DispatchQueue.main.async(execute: {
            self.label.text = "\(Observation.identifier)"
            print(Observation.confidence)
        })
    }

    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // executes request
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
Can anyone help me out with this?
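No answer was posted for this one, but for orientation: a classification model only ever gives you VNClassificationObservation (a label plus confidence), while an object-detection model surfaced through Vision gives you VNRecognizedObjectObservation, which carries a bounding box, so you can use the location and simply ignore the labels. A minimal sketch, assuming the .tflite detector has already been converted to a Core ML detection model; the ObjectDetector class name below is hypothetical:

import Vision

func makeDetectionRequest() -> VNCoreMLRequest? {
    // ObjectDetector is a placeholder for your converted detection model class.
    guard let model = try? VNCoreMLModel(for: ObjectDetector().model) else { return nil }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // A detection model returns VNRecognizedObjectObservation, not VNClassificationObservation.
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            // Normalized bounding box (origin bottom-left, values 0...1).
            let box = observation.boundingBox
            // Labels are available but can be ignored if you only need locations.
            let name = observation.labels.first?.identifier ?? "object"
            print("Detected \(name) at \(box)")
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}

The request itself is performed per frame exactly as in the code above; only the result handling changes.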

Related

How to change pixel colors for CGImage created from CVPixelBuffer?

I have a simple function that gets a CVPixelBuffer, turns it into a CGImage, and assigns it to a CALayer displayed on the screen.
func processResults(_ results: [Any]) {
    if let observation = (results as? [VNPixelBufferObservation])?.first {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(observation.pixelBuffer, options: nil, imageOut: &cgImage)
        caLayer.contents = cgImage
        if caLayer.superlayer == nil {
            segmentationOverlay.addSublayer(caLayer)
        }
    }
}
The result is the following (screenshot omitted; the mask renders in black and gray over the preview):
How do I get it?
lazy var request: VNCoreMLRequest = {
    let model = try! VNCoreMLModel(for: espnetv2_fp16_new().model)
    let request = VNCoreMLRequest(model: model) { request, error in
        DispatchQueue.main.async {
            if let results = request.results {
                self.processResults(results)
            }
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}()
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try! handler.perform([request])
    }
}
I would like to make the black color totally transparent and change the gray one to green. Is that possible?
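One way to get that effect (a sketch, not from an actual answer here) is to run the mask through Core Image's CIColorMatrix filter before handing it to the layer: zero out red and blue, and copy the mask's luminance into both the green and the alpha channels, so black pixels end up fully transparent and gray pixels end up semi-transparent green. This assumes the mask pixels have equal R, G and B values; the recolorMask name and the idea of calling it from processResults are illustrative, not part of the original code.

import CoreImage
import CoreVideo

// Sketch: recolor a grayscale mask so black -> transparent, gray/white -> green.
func recolorMask(_ maskBuffer: CVPixelBuffer, context: CIContext) -> CGImage? {
    let input = CIImage(cvPixelBuffer: maskBuffer)
    guard let filter = CIFilter(name: "CIColorMatrix") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    // Output red and blue are forced to 0; green and alpha copy the red
    // (luminance) channel of the mask.
    filter.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputRVector")
    filter.setValue(CIVector(x: 1, y: 0, z: 0, w: 0), forKey: "inputGVector")
    filter.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputBVector")
    filter.setValue(CIVector(x: 1, y: 0, z: 0, w: 0), forKey: "inputAVector")
    guard let output = filter.outputImage else { return nil }
    return context.createCGImage(output, from: output.extent)
}

Assigning the returned CGImage to caLayer.contents as before should then let the camera preview show through the black areas, provided nothing opaque is drawn behind the overlay layer.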

Variables show up as nil - Swift 4 iOS

For some reason my variables stringy and stringie print to the console just fine, but when I try to set them on a label they show up as nil.
My goal is to display the string and the float in the app's view controller, but this is just not working.
I think it has something to do with viewDidLoad, as if it's hiding the global variables; however, if I try to set my label outside viewDidLoad I get a declaration error.
// ViewController.swift
// Intellicam
//

import UIKit
import AVKit
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    var stringy: String!
    var stringie: Float!

    override func viewDidLoad() {
        super.viewDidLoad()

        // here we start the camera
        let captureSession = AVCaptureSession()
        captureSession.sessionPreset = .photo
        guard let captureDevice = AVCaptureDevice.default(for: .video) else { return }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else { return }
        captureSession.addInput(input)
        captureSession.startRunning()

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = view.frame

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.addOutput(dataOutput)

        // let request = VNCoreMLModel(model: VNCoreMLModel, completionHandler: VNRequestCompletionHandler)
        // VNImageRequestHandler(cgImage: <#T##CGImage#>, options: <#T##[VNImageOption : Any]#>)

        self.Labele.text = "Guess: \(stringy) + Certainty: \(stringie)"
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        //print("Camera was able to capture a frame:", Date())
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        guard let model = try? VNCoreMLModel(for: Resnet50().model) else { return }

        let request = VNCoreMLRequest(model: model) { (finishedReq, err) in
            //print(finishedReq.results)
            guard let results = finishedReq.results as? [VNClassificationObservation] else { return }
            guard let firstObservastion = results.first else { return }
            //print("Guess: \(firstObservastion.identifier) Certainty: \(firstObservastion.confidence)%")
            self.stringy = firstObservastion.identifier
            self.stringie = firstObservastion.confidence
            print(self.stringy)
            print(self.stringie)
        }

        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }

    @IBOutlet weak var Labele: UILabel!
}
First, never force-unwrap unless you are sure a value exists. In your case the VNCoreMLRequest can fail, in which case both variables stay unassigned and force-unwrapping them will definitely crash your app.
Also, make sure you use a proper naming convention for your label.
Your actual issue is that you never set the label's text from the result you get back.
To fix this, either update the label whenever the property changes:
var stringy: String? {
    didSet {
        DispatchQueue.main.async {
            self.Labele.text = self.stringy
        }
    }
}
or set it on the main queue right where you receive the result:
self.stringy = firstObservastion.identifier
self.stringie = firstObservastion.confidence
DispatchQueue.main.async {
    self.Labele.text = "Guess: \(self.stringy) + Certainty: \(self.stringie)"
}

How to get ImageBuffer correctly in Swift now with CMSampleBufferGetImageBuffer?

The deprecation warning says: 'CMSampleBufferGetImageBuffer' has been replaced by property 'CMSampleBuffer.imageBuffer'.
CMSampleBufferGet.ImageBuffer doesn't work :) It seems the parameters have also changed for Swift 4.2.
guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
The entire function, just in case:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // print("Camera was able to capture a frame:", Date())
    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    guard let model = try? VNCoreMLModel(for: ARS().model) else { return }

    let request = VNCoreMLRequest(model: model) { (finishedReq, err) in
        guard let results = finishedReq.results as? [VNClassificationObservation] else { return }
        guard let firstObservation = results.first else { return }
        print(firstObservation.identifier, firstObservation.confidence)
        DispatchQueue.main.async {
            self.identifierLabel.text = "\(firstObservation.identifier) \(firstObservation.confidence * 100)"
        }
    }

    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
Has anyone solved this, or does anyone have a reference for the new syntax?
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    connection.videoOrientation = AVCaptureVideoOrientation.portrait
    let imageBuffer: CVPixelBuffer = sampleBuffer.imageBuffer!
    let ciimage: CIImage = CIImage(cvPixelBuffer: imageBuffer)
    let image: UIImage = self.convert(cmage: ciimage)
}

// Convert CIImage to UIImage
func convert(cmage: CIImage) -> UIImage {
    let context: CIContext = CIContext(options: nil)
    let cgImage: CGImage = context.createCGImage(cmage, from: cmage.extent)!
    let image: UIImage = UIImage(cgImage: cgImage)
    return image
}

Calling multiple Requests in Swift

Working with Core ML and trying to execute two models, using the camera as the feed for image recognition. However, I can't seem to get VNCoreMLRequest to run two models at once. Any suggestions on how to run two models with this request?
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    var fitness_identifer = ""
    var fitness_confidence = 0

    guard let model_one = try? VNCoreMLModel(for: imagenet_ut().model) else { return }
    guard let model_two = try? VNCoreMLModel(for: ut_legs2().model) else { return }

    let request = VNCoreMLRequest(model: [model_one, model_two]) { (finishedRequest, error) in
        guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }
        guard let Observation = results.first else { return }
        DispatchQueue.main.async(execute: {
            fitness_identifer = Observation.identifier
            fitness_confidence = Int(Observation.confidence * 100)
            self.label.text = "\(Int(fitness_confidence))% it's a \(fitness_identifer)"
        })
    }

    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // executes request
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
Here is the error when I tried to add both models as an array (when I have just the one it works):
Contextual type 'VNCoreMLModel' cannot be used with array literal
Why not run two separate requests in an AsyncGroup:
let request1 = VNCoreMLRequest(model: model_one) { (finishedRequest, error) in
    //...
}
let request2 = VNCoreMLRequest(model: model_two) { (finishedRequest, error) in
    //...
}
//...
let group = AsyncGroup()
group.background {
    // Run on background queue
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request1])
}
group.background {
    // Run on background queue
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request2])
}
group.wait()
// Both operations completed here
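If you would rather not depend on a third-party grouping helper, two standard options cover the same ground: a single VNImageRequestHandler can perform both requests in one call (perform(_:) takes an array of requests), or plain GCD's DispatchGroup can stand in for AsyncGroup. A sketch, reusing request1, request2 and pixelBuffer from above:

// Option 1: one handler, both requests in a single perform call.
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
try? handler.perform([request1, request2])

// Option 2: the same background/wait pattern with DispatchGroup instead of AsyncGroup.
let group = DispatchGroup()

group.enter()
DispatchQueue.global(qos: .userInitiated).async {
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request1])
    group.leave()
}

group.enter()
DispatchQueue.global(qos: .userInitiated).async {
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request2])
    group.leave()
}

group.notify(queue: .main) {
    // Both perform calls have returned; each request's completion handler
    // updates the UI on its own.
}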

iOS 11: How can I use the Vision framework to track a face across video?

I can track an object across video, but I can't track a face.
When I use the camera to track a face, the code prints [].
extension FaceTrackingViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

        let request = VNDetectFaceLandmarksRequest { [unowned self] request, error in
            if let error = error {
                self.presentAlertController(withTitle: self.title,
                                            message: error.localizedDescription)
            } else {
                print("\(request.results!)")
            }
        }

        do {
            try handler.perform([request], on: pixelBuffer!)
        } catch {
            print(error)
        }
    }
}
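No answer was posted here either, but for reference the usual Vision pattern is: detect the face once with VNDetectFaceRectanglesRequest, then feed the resulting observation into a VNTrackObjectRequest that is performed frame by frame with a single, reused VNSequenceRequestHandler (the undeclared handler in the code above would need to be such a long-lived sequence handler, not one created per frame). A sketch under those assumptions; handleFrame and faceToTrack are illustrative names:

import Vision

// Properties on the view controller: one reused sequence handler and the
// observation currently being tracked.
let sequenceHandler = VNSequenceRequestHandler()
var faceToTrack: VNDetectedObjectObservation?

func handleFrame(_ pixelBuffer: CVPixelBuffer) {
    if let face = faceToTrack {
        // Track the previously detected face on this frame.
        let trackRequest = VNTrackObjectRequest(detectedObjectObservation: face) { request, _ in
            guard let updated = request.results?.first as? VNDetectedObjectObservation else { return }
            self.faceToTrack = updated   // feed the updated box into the next frame
            print("tracked face at \(updated.boundingBox)")
        }
        try? sequenceHandler.perform([trackRequest], on: pixelBuffer)
    } else {
        // No face yet: detect one to seed the tracker.
        let detectRequest = VNDetectFaceRectanglesRequest { request, _ in
            guard let face = (request.results as? [VNFaceObservation])?.first else { return }
            self.faceToTrack = VNDetectedObjectObservation(boundingBox: face.boundingBox)
        }
        try? sequenceHandler.perform([detectRequest], on: pixelBuffer)
    }
}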
