I would like to limit the microphone's input frequency for an iOS application I'm developing, which is going to run on all iPhone models available on the market as of 22 March 2016.
The microphone should only record when a certain frequency threshold is reached, and should not record anything below that threshold.
Any idea on how to achieve this programmatically?
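Not from the original question, but here is a minimal sketch of one common approach, assuming the "threshold" is really an input level gate: tap the microphone with AVAudioEngine, compute the RMS level of each buffer, and only write buffers that exceed the gate to disk. The GatedRecorder class name, the -30 dB threshold, and the file handling are illustrative assumptions; if the threshold is genuinely a frequency (pitch) rather than a loudness level, the same tap can feed an FFT (for example via Accelerate's vDSP) instead of an RMS check.

import AVFoundation

// Minimal sketch: gate recording on the input level of each buffer.
// Assumes the AVAudioSession category (.playAndRecord or .record) and
// microphone permission are already set up elsewhere.
final class GatedRecorder {
    private let engine = AVAudioEngine()
    private var file: AVAudioFile?
    private let thresholdDB: Float = -30.0   // assumed gate level; tune for your use case

    func start(writingTo url: URL) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        file = try AVAudioFile(forWriting: url, settings: format.settings)

        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            guard let self = self, let channel = buffer.floatChannelData?[0] else { return }

            // RMS of the current buffer, converted to decibels
            let frames = Int(buffer.frameLength)
            var sum: Float = 0
            for i in 0..<frames { sum += channel[i] * channel[i] }
            let rms = (sum / Float(max(frames, 1))).squareRoot()
            let db = 20 * log10(max(rms, .leastNonzeroMagnitude))

            // Only persist audio above the gate; everything quieter is dropped.
            if db > self.thresholdDB {
                try? self.file?.write(from: buffer)
            }
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}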
Related
We have used WebRTC & ARKit to implement video communication in our iOS app. Using RTCVideoCapturer, we send a customized frame to WebRTC. To send each frame, we first capture a screenshot of the view, then get the CGImage and generate the pixel buffer.
This works, but after a certain interval of time the device starts to heat up; the increased CPU usage causes the heating problem.
What modifications can be made to reduce CPU usage?
CPU Utilization & Thermal State Changes
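Not part of the original post, but one sketch of a typical mitigation, assuming the per-frame capture currently runs at full display rate: cap how often the view is captured with CADisplayLink.preferredFramesPerSecond, so far fewer screenshot/CGImage/pixel-buffer round trips happen per second. CaptureThrottler and captureAndSendFrame are placeholder names standing in for the app's existing RTCVideoCapturer path.

import UIKit

// Sketch: run the expensive capture at a fixed, reduced rate instead of
// on every screen refresh. Lowering the capture rate directly lowers CPU
// load and, with it, the thermal pressure.
final class CaptureThrottler {
    private var displayLink: CADisplayLink?
    private let captureAndSendFrame: () -> Void   // placeholder for the existing pipeline

    init(captureAndSendFrame: @escaping () -> Void) {
        self.captureAndSendFrame = captureAndSendFrame
    }

    func start(framesPerSecond: Int = 15) {        // assumed cap; tune as needed
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.preferredFramesPerSecond = framesPerSecond
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func tick() {
        captureAndSendFrame()
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}

Reusing pixel buffers from a CVPixelBufferPool instead of allocating a new one per frame, and backing off further when ProcessInfo.processInfo.thermalState rises, are other levers worth trying.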
I am trying to implement a simple recorder synced with a background track using AudioKit 5.
Unfortunately, I face a latency issue when playing back the recorded file along with the background track.
The latency values returned by AVAudioSession.sharedInstance() are around 0.01 s for both input and output when running in the simulator on my MacBook Pro.
However, the latency measured between the background track and the recorded one is around 1.4 s.
Do you have any idea what could cause this huge latency?
And how can I compute it accurately, so I can start the recorder late to compensate for it?
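Not from the question itself, but a small sketch of the usual first step, assuming the goal is to offset the recording by the session-reported round trip: sum the input latency, the output latency, and the I/O buffer duration on both sides. Simulator values are not representative of real hardware, so this should be evaluated on a device.

import AVFoundation

// Sketch: estimate the record/playback offset from AVAudioSession's
// reported latencies. The result can be used to delay the recorder's
// start, or to trim the head of the recorded file before playback.
func estimatedRecordingOffset() -> TimeInterval {
    let session = AVAudioSession.sharedInstance()
    // inputLatency: sound reaching the mic -> samples reaching the app
    // outputLatency: samples scheduled by the app -> sound leaving the speaker
    // ioBufferDuration: buffering added on each side of the round trip
    return session.inputLatency
         + session.outputLatency
         + 2 * session.ioBufferDuration
}

If the computed value is still far from the observed ~1.4 s, the gap usually comes from buffering in the recorder/player graph rather than from hardware latency; an empirical loopback test (play a short click, record it, and compare timestamps) measures the real offset.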
The ARKit API supports simultaneous world and face tracking via the back and front cameras, but unfortunately, due to hardware limitations, the new iPad Pro 2020 is unable to use this feature (probably because the LiDAR camera draws a lot more power). This is a bit of a step back.
Here is an updated reference in the example project:
guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
    fatalError("""
        This sample code requires iOS 13 / iPadOS 13, and an iOS device with \
        a front TrueDepth camera. Note: 2020 iPads do not support user \
        face-tracking while world tracking.
        """)
}
There is also a forum conversation suggesting that this is an unintentional hardware limitation.
It looks like the mobile technology is not "there yet" for both. However, for my use case I just wanted to be able to switch between front and back tracking modes seamlessly, without needing to reconfigure the tracking space. For example, I would like a button to toggle between "now you track and see my face" mode and "world tracking" mode.
There are two cases, either it's possible or it's not, and the alternative approaches depend on which one it is.
If it's possible: would switching AR tracking modes require setting up the tracking space again, and if so, how would the switch be achieved?
If it's impossible:
Even if I don't get face-tracking during world-tracking, is there a way to get a front-facing camera feed that I can use with the Vision framework, for example?
Specifically: how do I enable back-facing (world) tracking, get the front- and back-facing camera feeds simultaneously, and selectively disable one or the other? Even if this works without front-facing tracking and only provides the basic feed, it will do.
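Not from the original question, but a rough sketch of the configuration toggle under one assumption: switching configurations on an ARSession does restart tracking, and the way to avoid a full re-scan is to save the ARWorldMap before leaving world tracking and hand it back via initialWorldMap when returning. TrackingModeSwitcher is an illustrative name.

import ARKit

// Sketch: toggle between world tracking and face tracking on the same
// ARSession, preserving the mapped space across the switch via ARWorldMap.
final class TrackingModeSwitcher {
    private let session: ARSession
    private var savedWorldMap: ARWorldMap?

    init(session: ARSession) {
        self.session = session
    }

    func switchToFaceTracking() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        // Save the current world map so world tracking can relocalize later.
        session.getCurrentWorldMap { [weak self] map, _ in
            self?.savedWorldMap = map
            let config = ARFaceTrackingConfiguration()
            self?.session.run(config, options: [.resetTracking, .removeExistingAnchors])
        }
    }

    func switchToWorldTracking() {
        let config = ARWorldTrackingConfiguration()
        config.initialWorldMap = savedWorldMap   // relocalize into the saved space
        if ARWorldTrackingConfiguration.supportsUserFaceTracking {
            // Front-camera face data during world tracking, where the hardware allows it.
            config.userFaceTrackingEnabled = true
        }
        session.run(config, options: [.resetTracking])
    }
}

This does not give a simultaneous front camera feed on devices that lack supportsUserFaceTracking; it only makes the mode switch cheap by relocalizing into the saved map instead of rebuilding the tracking space.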
I am a beginner iOS developer working on a research project. We want to be able to take several photos within a couple of seconds, but we also want full control over the number of photos taken per second.
What limitations are there on the number of photos the iPhone can take per second via the regular camera (not burst or time-lapse)? Is there a maximum value?
Or is there a way to control the capture frequency when recording video?
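Not from the original question, but a sketch of the video-based route, under the assumption that video frames (rather than full-quality stills) are acceptable for the research use case: run an AVCaptureVideoDataOutput and keep one frame per chosen interval. TimedFrameGrabber, the queue label, and the onFrame callback are illustrative names.

import AVFoundation

// Sketch: deliver camera frames at a caller-chosen rate by dropping
// everything that arrives sooner than the requested interval.
final class TimedFrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "frame.grabber")   // assumed label
    private var lastCaptureTime = CMTime.zero
    private let interval: CMTime
    var onFrame: ((CVPixelBuffer) -> Void)?   // hand each kept frame to the caller

    init(framesPerSecond: Double) {
        interval = CMTime(seconds: 1.0 / framesPerSecond, preferredTimescale: 600)
        super.init()
    }

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)

        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        // Drop frames that arrive sooner than the requested interval.
        guard CMTimeSubtract(timestamp, lastCaptureTime) >= interval else { return }
        lastCaptureTime = timestamp
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            onFrame?(pixelBuffer)
        }
    }
}

For full-resolution stills, AVCapturePhotoOutput can be triggered repeatedly instead, but its achievable rate varies by device and capture settings; the video route trades resolution for a predictable cadence.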
I want to build a step counter for BlackBerry OS 7. I want my app to count every user motion, but I don't know how to detect vibration/motion using the accelerometer, similar to SensorEvent on Android. Can anybody help me?