I'm developing an ARKit/Vision iOS app with gesture recognition. The app has a simple UI containing a single UIView; there's no ARSCNView/ARSKView at all. I feed the CVPixelBuffer of each captured ARFrame to Vision for VNRecognizedObjectObservation.
I don't need any tracking data from the session; I just need currentFrame.capturedImage as the CVPixelBuffer. And I need to capture ARFrames at 30 fps; 60 fps is an excessive frame rate.
The preferredFramesPerSecond instance property is useless in my case, because it controls the rendering frame rate of an ARSCNView/ARSKView. I have no AR views, and it doesn't affect the session's frame rate.
So I decided to use the run() and pause() methods to decrease the session's frame rate.
Question
I'd like to know how to automatically run and pause an ARSession at a fixed interval. Each run and pause phase must last 16 ms (0.016 s). I suppose it might be possible with DispatchQueue, but I don't know how to implement it.
How to do it?
Here's some pseudo-code:
session.run(configuration)
/* run lasts 16 ms */
session.pause()
/* pause lasts 16 ms */
session.run(session.configuration!)
/* etc... */
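For what it's worth, a literal implementation of this pseudo-code could look like the sketch below, using a repeating Timer (SessionToggler is a hypothetical wrapper of my own; as the answers point out, neither Timer nor DispatchQueue gives real-time guarantees):

import ARKit

// Hypothetical sketch only: toggles an ARSession every 16 ms with a Timer.
final class SessionToggler {
    let session = ARSession()
    let configuration = ARWorldTrackingConfiguration()
    private var timer: Timer?
    private var isRunning = false

    func start() {
        session.run(configuration)
        isRunning = true
        // Fires every 16 ms; each tick flips between run and pause.
        timer = Timer.scheduledTimer(withTimeInterval: 0.016, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            if self.isRunning {
                self.session.pause()
            } else {
                self.session.run(self.configuration)
            }
            self.isRunning.toggle()
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
        session.pause()
    }
}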
P.S. I can use neither CocoaPods nor Carthage in my app.
Update: here's how the ARSession's currentFrame.capturedImage is retrieved and used.
let session = ARSession()

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    session.delegate = self
    let configuration = ARImageTrackingConfiguration() // 6DOF
    configuration.providesAudioData = false
    configuration.isAutoFocusEnabled = true
    configuration.isLightEstimationEnabled = false
    configuration.maximumNumberOfTrackedImages = 0
    session.run(configuration)
    spawnCoreMLUpdate()
}

func spawnCoreMLUpdate() { // spawning new async tasks
    dispatchQueue.async {
        self.spawnCoreMLUpdate()
        self.updateCoreML()
    }
}

func updateCoreML() {
    guard let pixelBuffer = session.currentFrame?.capturedImage else { return }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])
    do {
        try imageRequestHandler.perform(self.visionRequests)
    } catch {
        print(error)
    }
}
If what you want is to reduce the frame rate from 60 to 30, you should use the preferredFramesPerSecond property of SCNView. I'm assuming you're using an ARSCNView, which is a subclass of SCNView.
Property documentation.
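With an ARSCNView (or any SCNView), that is a one-line change; for example, assuming an outlet named sceneView:

// Caps SceneKit rendering at 30 fps; the capture session itself still runs at its native rate.
sceneView.preferredFramesPerSecond = 30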
I don't think the run() and pause() strategy is the way to go, because the DispatchQueue API is not designed for real-time accuracy, which means there is no guarantee the pause will be 16 ms every time. On top of that, restarting a session might not be immediate and could add more delay.
Also, the code you shared will capture at most one image, and since session.run(configuration) is asynchronous it will probably capture no frames at all.
As you're not using ARSCNView/ARSKView, the only way is to implement the ARSession delegate to be notified of every captured frame.
Of course the delegate will most likely be called every 16 ms, because that's how the camera works. But you can decide which frames you are going to process: by using the frame's timestamp you can process one frame every 32 ms and drop the others, which is equivalent to 30 fps processing.
Here is some code to get you started. Make sure dispatchQueue is serial (not concurrent) so that your buffers are processed sequentially:
var lastProcessedFrame: ARFrame?

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    dispatchQueue.async {
        self.updateCoreML(with: frame)
    }
}

private func shouldProcessFrame(_ frame: ARFrame) -> Bool {
    guard let lastProcessedFrame = lastProcessedFrame else {
        // Always process the first frame
        return true
    }
    return frame.timestamp - lastProcessedFrame.timestamp >= 0.032 // 32 ms for 30 fps
}

func updateCoreML(with frame: ARFrame) {
    guard shouldProcessFrame(frame) else {
        // Less than 32 ms since the previous frame
        return
    }
    lastProcessedFrame = frame
    let pixelBuffer = frame.capturedImage
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    do {
        try imageRequestHandler.perform(self.visionRequests)
    } catch {
        print(error)
    }
}
If I understand it correctly, you can achieve this via DispatchQueue. If you run the code below, it prints HHH first, then waits for 1 second, then prints ABC. You can plug in your own functions and, of course, change the time interval from 1 second to your desired value.
let syncConc = DispatchQueue(label: "con", attributes: .concurrent)

DispatchQueue.global(qos: .utility).async {
    syncConc.async {
        for _ in 0...10 {
            print("HHH - \(Thread.current)")
            Thread.sleep(forTimeInterval: 1)
            print("ABC - \(Thread.current)")
        }
    }
}
P.S. I'm still not sure whether Thread.sleep will block your process; if it does, I'll edit my answer.
Related
My app runs Vision with a Core ML model. The camera frames the model runs on come from an ARKit sceneView (basically, the camera). I have a method called loopCoreMLUpdate() that runs continuously, so that the model keeps being applied to new camera frames. The code looks like this:
import UIKit
import SceneKit
import ARKit

class MyViewController: UIViewController {

    var visionRequests = [VNRequest]()
    let dispatchQueueML = DispatchQueue(label: "com.hw.dispatchqueueml") // a serial queue

    override func viewDidLoad() {
        super.viewDidLoad()
        // Setup ARKit sceneview
        // ...
        // Begin loop to update CoreML
        loopCoreMLUpdate()
    }

    // This is the problematic part.
    // In fact - once it's run there's no way to stop it, is there?
    func loopCoreMLUpdate() {
        // Continuously run CoreML whenever it's ready. (Preventing 'hiccups' in frame rate)
        dispatchQueueML.async {
            // 1. Run update.
            self.updateCoreML()
            // 2. Loop this function.
            self.loopCoreMLUpdate()
        }
    }

    func updateCoreML() {
        // Get the camera image as RGB
        guard let pixbuff = sceneView.session.currentFrame?.capturedImage else { return }
        let ciImage = CIImage(cvPixelBuffer: pixbuff)
        // Note: Not entirely sure if the ciImage is being interpreted as RGB, but for now it works with the Inception model.
        // Note 2: Also uncertain if the pixelBuffer should be rotated before handing off to Vision (VNImageRequestHandler) - regardless, for now, it still works well with the Inception model.

        // Prepare the CoreML/Vision request
        let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])
        // let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage!, orientation: myOrientation, options: [:]) // Alternatively, we can convert the above to an RGB CGImage and use that. UIInterfaceOrientation can inform the orientation values.

        // Run the image request
        do {
            try imageRequestHandler.perform(self.visionRequests)
        } catch {
            print(error)
        }
    }
}
As you can see, the loop effect is created by a DispatchQueue with the label com.hw.dispatchqueueml that keeps calling loopCoreMLUpdate(). Is there any way to stop the queue once Core ML is not needed anymore? Full code is here.
I suggest that instead of running the Core ML model from viewDidLoad, you use the ARSessionDelegate method for the same purpose.
Use func session(_ session: ARSession, didUpdate frame: ARFrame) to get each frame; there you can check a flag that controls when you want the model to run and when you don't.
Like this:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // This is where we analyse our frame.
    // Return early unless the flag says the model should run.
    guard isMLFlow else {
        return
    }
    currentBuffer = frame.capturedImage
    // UIImage(pixelBuffer:) is assumed to be a custom convenience initializer.
    guard let buffer = currentBuffer, let image = UIImage(pixelBuffer: buffer) else { return }
    <Code here to load model>
    CoreMLManager.manager.updateClassifications(for: image)
}
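To directly answer the "is there any way to stop it?" part: if you do keep the recursive loop from the question, a simple flag can break it. A minimal sketch (isLoopActive and stopCoreMLLoop are hypothetical names on the view controller):

var isLoopActive = true

func loopCoreMLUpdate() {
    dispatchQueueML.async {
        // Stop re-scheduling once the flag has been cleared.
        guard self.isLoopActive else { return }
        self.updateCoreML()
        self.loopCoreMLUpdate()
    }
}

// Call this when Core ML is no longer needed:
func stopCoreMLLoop() {
    isLoopActive = false
}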
I am trying to skip forward (add 10 seconds) or backward (subtract 10 seconds) in the Spotify player, but I am really confused about how to do it.
When I try to use this code, the playback position doesn't change:
// forward button action
@IBAction func moveFrdBtnAction(_ sender: Any) {
    SpotifyManager.shared.audioStreaming(SpotifyManager.shared.player, didSeekToPosition: TimeInterval(10))
}

// Spotify delegate method didSeekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    player?.seek(to: position, callback: { (error) in
        let songDuration = audioStreaming.metadata.currentTrack?.duration as Any as! Double
        self.delegate?.getSongTime(timeCount: Int(songDuration) + 1)
    })
}
We are making a music application using the same SDK on both platforms (Android & iOS). The seekToPosition method of the Spotify SDK works correctly in the Android version, but not in the iOS one: the delegate method gets called, yet the music stops.
Could you let us know why this happens and what we should do to make it work on iOS devices as well?
Can someone please explain how to solve this? I've tried, but no results yet.
Any help would be greatly appreciated. Thanks in advance.
I don't use this API, so my answer is based on your code and Spotify's reference documentation.
I think there are a few things wrong with your flow:
As Robert Dresler commented, you should (approximately) never call a delegate directly; a delegate calls you.
I'm pretty sure your action currently results in jumping to exactly 10 seconds, not by 10 seconds.
(As an aside, I'd suggest changing the name of your function moveFrdBtnAction to at least add more vowels)
Anyway, here's my best guess at what you want:
// forward button action
@IBAction func moveForwardButtonAction(_ sender: Any) {
    skipAudio(by: 10)
}

@IBAction func moveBackButtonAction(_ sender: Any) {
    skipAudio(by: -10)
}

func skipAudio(by interval: TimeInterval) {
    if let player = player {
        let position = player.playbackState.position // the documentation alludes to milliseconds, but the examples don't
        player.seek(to: position + interval, callback: { (error) in
            // Handle the error (if any)
        })
    }
}

// Spotify delegate method didSeekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    // Update your UI
}
Note that I have not handled seeking before the start of the track or past its end, which could happen with a simple position + interval. The API may handle this for you, or not.
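If you want to guard against that yourself, clamping the target position is straightforward. A sketch, assuming the metadata API from your snippet exposes the current track's duration:

func skipAudio(by interval: TimeInterval) {
    guard let player = player else { return }
    // Track duration via the metadata API from the question; falls back to
    // "no upper bound" if no track is loaded.
    let duration = player.metadata.currentTrack?.duration ?? .greatestFiniteMagnitude
    // Clamp the target between the start and the end of the track.
    let target = max(0, min(player.playbackState.position + interval, duration))
    player.seek(to: target, callback: { error in
        // Handle the error (if any)
    })
}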
You could take a look at the examples here: spotify/ios-sdk. In the NowPlayingView example they use seekForward15Seconds; maybe you could use that? If you still need 10 s, I have added a function below. The position is in milliseconds, as the docs state: "position: The position to seek to in milliseconds".
ViewController.swift
var appRemote: SPTAppRemote {
    get {
        return AppDelegate.sharedInstance.appRemote
    }
}

fileprivate func seekForward15Seconds() {
    appRemote.playerAPI?.seekForward15Seconds(defaultCallback)
}

fileprivate func seekBackward15Seconds() {
    appRemote.playerAPI?.seekBackward15Seconds(defaultCallback)
}

// TODO: Or you could try this function
func seekForward(seconds: Int) {
    appRemote.playerAPI?.getPlayerState({ (result, error) in
        // playback position in milliseconds
        guard let currentPosition = self.playerState?.playbackPosition else { return }
        let secondsInMilliseconds = seconds * 1000
        self.appRemote.playerAPI?.seek(toPosition: currentPosition + secondsInMilliseconds, callback: { (result, error) in
            if let error = error {
                print(error)
                return
            }
        })
    })
}

var defaultCallback: SPTAppRemoteCallback {
    get {
        return { [weak self] _, error in
            if let error = error {
                self?.displayError(error as NSError)
            }
        }
    }
}
AppDelegate.swift
lazy var appRemote: SPTAppRemote = {
    let configuration = SPTConfiguration(clientID: self.clientIdentifier, redirectURL: self.redirectUri)
    let appRemote = SPTAppRemote(configuration: configuration, logLevel: .debug)
    appRemote.connectionParameters.accessToken = self.accessToken
    appRemote.delegate = self
    return appRemote
}()

class var sharedInstance: AppDelegate {
    get {
        return UIApplication.shared.delegate as! AppDelegate
    }
}
Edit 1:
For this to work you need to follow the "Prepare Your Environment" steps:
Add the SpotifyiOS.framework to your Xcode project.
Hope it helps!
I have created a subclass of SCNNode, made up of a few child nodes.
I have declared a method, soundCasual(), which adds an SCNAudioPlayer to the instance of this class. Everything works as expected and audio plays when the method is called. The method is called whenever the node is tapped (via a gesture).
Code:
class MyNode: SCNNode {

    let wrapperNode = SCNNode()
    let audioSource5 = SCNAudioSource(fileNamed: "audiofile.mp3")

    override init() {
        super.init()
        if let virtualScene = SCNScene(named: "MyNode.scn", inDirectory: "Assets.scnassets/Shapes/MyNode") {
            for child in virtualScene.rootNode.childNodes {
                wrapperNode.addChildNode(child)
            }
        }
    }

    // Required by SCNNode; not used in this example.
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func soundCasual() {
        DispatchQueue.global(qos: .userInteractive).async { [weak self] in
            if let audioSource = self?.audioSource5 {
                let audioPlayer = SCNAudioPlayer(source: audioSource)
                self?.wrapperNode.removeAllAudioPlayers()
                self?.wrapperNode.addAudioPlayer(audioPlayer)
            }
        }
    }
}
Issue within Instruments (Allocations)
When I profile my whole codebase in Instruments (which covers several other things), I see that every time I tap on that node, the allocation count of SCNAudioPlayer increases by one, and the increase is persistent. From the definition of SCNAudioPlayer I assumed the player is removed after playback, so the increment should appear under transient allocations, but it doesn't. That is why I tried calling removeAllAudioPlayers() before adding an SCNAudioPlayer to the node, as you can see in soundCasual(), but the issue remains.
By the time this snapshot was taken, I had tapped the node about 17 times, and Instruments showed 17 persistent allocations of SCNAudioPlayer.
Note: the SCNAudioSource count is 10, as it should be, since I use 10 audio sources in the app.
And this is happening for all other SCNNodes in my application without fail.
Kindly help, as I am not able to understand what exactly I am missing.
EDIT
As recommended, I changed my init() to:
let path = Bundle.main.path(forResource: "Keemo", ofType: "scn", inDirectory: "Assets.scnassets/Shapes/Keemo")
if let path = path, let keemo = SCNReferenceNode(url: URL(fileURLWithPath: path)) {
    keemo.load()
}

func soundPlay() {
    DispatchQueue.global(qos: .userInteractive).async { [weak self] in
        if let audioSource = self?.audioSourcePlay {
            audioSource.volume = 0.1
            let audioPlayer = SCNAudioPlayer(source: audioSource)
            self?.removeAllAudioPlayers()
            self?.addAudioPlayer(audioPlayer)
        }
    }
}
Despite this, Instruments still shows the audio players as persistent allocations, even though checking node.audioPlayers confirms that at any one time only one audio player is attached to the node.
EDIT
This issue appears even in the boilerplate SceneKit app that Xcode creates by default, so it has been reported to Apple as a bug: https://bugreport.apple.com/web/?problemID=43482539
WORKAROUND
I am using AVAudioPlayer instead of SCNAudioPlayer. It's not exactly the same thing, but at least this way memory will not grow until a crash.
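For reference, the workaround looks roughly like this (a sketch; the property and file names are hypothetical, and you lose SceneKit's positional audio):

import AVFoundation

// Keep a strong reference; a local player would be deallocated before it finishes.
var tapPlayer: AVAudioPlayer?

func playTapSound() {
    // "audiofile.mp3" stands in for one of the 10 sources mentioned above.
    guard let url = Bundle.main.url(forResource: "audiofile", withExtension: "mp3") else { return }
    tapPlayer = try? AVAudioPlayer(contentsOf: url)
    tapPlayer?.play()
}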
I am not familiar with SceneKit, but from my experience with UIKit and SpriteKit I suspect that your use of wrapperNode and virtualScene is interfering with object deallocation.
I would try removing wrapperNode and adding everything to self (since self is an SCNNode).
Which node is actually used in your scene, self or wrapperNode? In your sample code wrapperNode is never added to self, so it may or may not actually be part of the scene.
Also, you should probably be using SCNReferenceNode instead of the virtual-scene approach you're using.
!!! this code has not been tested !!!
class MyNode: SCNReferenceNode {

    let audioSource5 = SCNAudioSource(fileNamed: "audiofile.mp3")

    func soundCasual() {
        DispatchQueue.global(qos: .userInteractive).async { [weak self] in
            if let audioSource = self?.audioSource5 {
                let audioPlayer = SCNAudioPlayer(source: audioSource)
                self?.removeAllAudioPlayers()
                self?.addAudioPlayer(audioPlayer)
            }
        }
    }
}

// If you programmatically create this node, you'll have to call .load() on it
let referenceNode = SCNReferenceNode(url: referenceURL)
referenceNode?.load()
HtH!
If you haven't found the answer already, you need to remove the SCNAudioPlayer from the node once it has completed playing:
if let audioSource = self?.audioSourcePlay {
    audioSource.volume = 0.1
    let audioPlayer = SCNAudioPlayer(source: audioSource)
    self?.removeAllAudioPlayers()
    self?.addAudioPlayer(audioPlayer)
    audioPlayer.didFinishPlayback = {
        self?.removeAudioPlayer(audioPlayer)
    }
}
I have an app that I'm adding sounds to. It has a keypad, and when the user taps a button, the number animates to show the user that their press went through.
However, since both happen on the main thread, adding the code below makes the play() call cause a slight delay in the animation. If the user waits ~2 seconds and taps a keypad number again, they see the delay again; as long as they keep hitting keypad numbers within 2 s, they don't see another lag.
I tried wrapping the code below in a DispatchQueue.main.async {} block, with no luck.
if let sound = CashSound(rawValue: "buttonPressMelody\(count)"),
   let url = Bundle.main.url(forResource: sound.rawValue, withExtension: sound.fileType) {
    self.audioPlayer = try? AVAudioPlayer(contentsOf: url)
    self.audioPlayer?.prepareToPlay()
    self.audioPlayer?.play()
}
How can I play this audio and have the animation run without them interfering and with the audio coinciding with the press?
Thanks
I experienced a rather similar problem in my SwiftUI app, and in my case the solution was proper resource loading and proper AVAudioPlayer initialization and preparation.
I use the following class to load audio from disk (inspired by the ImageStore class from Apple's Landmarks SwiftUI tutorial):
final class AudioStore {

    typealias Resources = [String: AVAudioPlayer]

    fileprivate var audios: Resources = [:]

    static var shared = AudioStore()

    func audio(with name: String) -> AVAudioPlayer {
        let index = guaranteeAudio(for: name)
        return audios.values[index]
    }

    static func loadAudio(with name: String) -> AVAudioPlayer {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
            fatalError("Couldn't find audio \(name).mp3 in main bundle.")
        }
        do {
            return try AVAudioPlayer(contentsOf: url)
        } catch {
            fatalError("Couldn't load audio \(name).mp3 from main bundle.")
        }
    }

    fileprivate func guaranteeAudio(for name: String) -> Resources.Index {
        if let index = audios.index(forKey: name) { return index }
        audios[name] = AudioStore.loadAudio(with: name)
        return audios.index(forKey: name)!
    }
}
In the view's init I obtain the player instance by calling audio(with:) with the proper resource name. In onAppear() I call prepareToPlay() on the view's already-initialized player, with proper optional unwrapping if needed. Finally, I play the audio when the gesture fires.
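Put together, the wiring looks roughly like this (a sketch; KeypadButton and the resource name are hypothetical):

import SwiftUI
import AVFoundation

struct KeypadButton: View {
    // Resolved once, when the view is created.
    private let player: AVAudioPlayer

    init() {
        // "buttonPress" is a hypothetical resource name.
        player = AudioStore.shared.audio(with: "buttonPress")
    }

    var body: some View {
        Text("9")
            .onAppear { player.prepareToPlay() }
            .onTapGesture { player.play() }
    }
}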
Also, in my case I needed to delay the actual playback by about 0.3 seconds, and for that I dispatched it to the global queue. I should stress that the animation was smooth alongside the audio playback even without dispatching to a background queue, so I concluded the key was in proper initialization and preparation. To delay the playback, however, you must use a background thread, otherwise you will get the lag.
DispatchQueue.global(qos: .userInitiated).asyncAfter(deadline: .now() + 0.3) {
    // your audio.play() analog here
}
Hope that will help some soul out there.
I have an array of times at which the background color of a UIView should change. The array holds values from 0 to 10000 ms, where each element is the offset between the first change and that change. I implemented a loop to execute the changes at the scheduled times, and judging by print statements it seems to be working. However, the background color only changes to the last color instead of changing at each step.
I believe this is because the background color does not update until the loop is done. Is this correct, and if so, how can I fix this?
sendingView.backgroundColor = UIColor.black
let startingTime = getCurrentMillis()
var found = false

for time in receivingMsgData {
    while !found {
        let curDuration = getCurrentMillis() - startingTime
        if curDuration > time {
            if sendingView.backgroundColor == UIColor.black {
                sendingView.backgroundColor = UIColor.white
                print("playing at \(getCurrentMillis() - startingTime) turning white")
            } else {
                sendingView.backgroundColor = UIColor.black
                print("playing at \(getCurrentMillis() - startingTime) turning black")
            }
            found = true
        }
    }
    found = false
}
Try something like this
DispatchQueue.global(qos: .userInitiated).async {
    let delay: TimeInterval = 0.01 // seconds
    for index in 0..<100 {
        if index % 2 == 0 {
            Thread.sleep(forTimeInterval: delay)
            DispatchQueue.main.async {
                self.view.backgroundColor = UIColor.white
                print("white")
            }
        } else {
            Thread.sleep(forTimeInterval: delay)
            DispatchQueue.main.async {
                self.view.backgroundColor = UIColor.black
                print("black")
            }
        }
    }
}
Do your work on a background thread and change the color on the main thread.
You're doing too much on the main thread, causing it to hang; also add some delay so the user can actually see the changes.
Try something like this in your case:
let delay: TimeInterval = 0.01 // seconds
sendingView.backgroundColor = UIColor.black
let startingTime = getCurrentMillis()
var found = false

DispatchQueue.global(qos: .userInitiated).async {
    for time in receivingMsgData {
        while !found {
            let curDuration = getCurrentMillis() - startingTime
            if curDuration > time {
                if sendingView.backgroundColor == UIColor.black {
                    Thread.sleep(forTimeInterval: delay)
                    DispatchQueue.main.async {
                        sendingView.backgroundColor = UIColor.white
                        print("playing at \(getCurrentMillis() - startingTime) turning white")
                    }
                } else {
                    Thread.sleep(forTimeInterval: delay)
                    DispatchQueue.main.async {
                        sendingView.backgroundColor = UIColor.black
                        print("playing at \(getCurrentMillis() - startingTime) turning black")
                    }
                }
                found = true
            }
        }
        found = false
    }
}
The code you posted either blocks the main thread, or the color is being set off the main thread. Both situations will most likely result in no visible changes until the whole loop has finished, or until some later time.
When your data is scheduled at times that have nothing to do with the screen's refresh rate, you need to supply the color on demand, when the screen is about to refresh.
In your case I would suggest using a CADisplayLink, which is designed to fire whenever the screen should refresh. You can even get the current time from the display link itself; that elapsed time can be used the same way you are already using curDuration in your code, assuming the rest of the code works fine.
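A minimal sketch of that approach, assuming receivingMsgData holds millisecond offsets as in the question and lives alongside sendingView in the view controller:

var displayLink: CADisplayLink?
var startTimestamp: CFTimeInterval = 0

func startFlashing() {
    sendingView.backgroundColor = .black
    displayLink = CADisplayLink(target: self, selector: #selector(step(_:)))
    displayLink?.add(to: .main, forMode: .common)
}

@objc func step(_ link: CADisplayLink) {
    if startTimestamp == 0 { startTimestamp = link.timestamp }
    let elapsedMs = (link.timestamp - startTimestamp) * 1000
    // Count how many scheduled change times have already passed:
    // an even count means black, an odd count means white.
    let changes = receivingMsgData.filter { Double($0) <= elapsedMs }.count
    sendingView.backgroundColor = changes % 2 == 0 ? .black : .white
    if changes == receivingMsgData.count {
        link.invalidate() // all scheduled changes done
    }
}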
I believe this is because the background color does not update until the loop is done. Is this correct
Yes, it is. Drawing doesn't happen until all your code on the main thread finishes and the CATransaction is committed. You would need to do one of two things:
Run your time-consuming code on a background thread, coming to the main thread only to talk to the interface and set the color.
Use some form of delay to get off the main thread long enough to let drawing take place before continuing with your "loop"; see the sketch below.
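For the second option, since the change times are known up front, you can simply schedule every change with asyncAfter and let the main thread breathe in between. A sketch, under the same assumption that receivingMsgData holds millisecond offsets:

sendingView.backgroundColor = .black
for (index, time) in receivingMsgData.enumerated() {
    // Schedule each flip at its millisecond offset; the main thread stays
    // free between blocks, so drawing can actually happen.
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(Int(time))) {
        sendingView.backgroundColor = index % 2 == 0 ? .white : .black
    }
}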