I have an iOS app with a regular tracking network update that is using a lot of power, and we are looking for ways to reduce this. Is it possible to power down the networking hardware more quickly, i.e. to decrease the additional period of time it stays turned on in anticipation of more work?
Thanks
Feras A.
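As far as I know, iOS exposes no public API for shortening the radios' power-down ("tail") time; what an app can control is how often the radio wakes at all. A common mitigation is to batch updates and hand them to a discretionary background URLSession so the system coalesces transfers at power-friendly moments. A minimal sketch, where the session identifier, URL, file path, and byte-count hint are all placeholders:

```swift
import Foundation

// Batch periodic tracking updates into one deferred upload so the radio
// wakes less often. "com.example.tracking" and the URL are placeholders.
let config = URLSessionConfiguration.background(withIdentifier: "com.example.tracking")
config.isDiscretionary = true        // let the system pick a power-friendly moment

let session = URLSession(configuration: config)

var request = URLRequest(url: URL(string: "https://example.com/track")!)
request.httpMethod = "POST"

// Upload one accumulated batch file instead of many small live requests.
let batchFile = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("batch.json")
let task = session.uploadTask(with: request, fromFile: batchFile)
task.countOfBytesClientExpectsToSend = 1_024   // scheduling hint (iOS 11+)
task.resume()
```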
We need to develop an iOS app that will make heavy use of ARSession with ARFaceTrackingConfiguration on more or less 100% of its screens. Our main concern is battery usage. We're in the discovery phase, so I'm trying to find and research all the possible risks related to it.
For now, what I found is:
ARKit sessions don't "drain" the battery (I did some testing with my phone); they just consume more than common apps do (roughly the same amount of battery as having the iPhone camera open for a long period, with the addition that the ARSession is continually creating frames in order to track the face, i.e. real-time video processing).
I'm aware of the run(_:options:) and pause() methods on ARSession, which let me pause the tracking functionality when it's not necessary (see the sketch after this list).
I'm aware of lowPowerModeEnabled and that we can react to changes in that property.
I searched Apple Developer for an official article with general considerations about battery usage in an ARKit session, but I couldn't find anything dedicated to the topic. I'm wondering if there is something really important that I should be aware of and am missing when implementing the feature. This is my first time working with ARKit, and it's critical for the project.
Thanks in advance!
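A minimal sketch of the two points above, assuming a view controller that owns the session and an example (purely hypothetical) policy of pausing tracking in Low Power Mode:

```swift
import ARKit
import UIKit

final class FaceTrackingViewController: UIViewController {
    private let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        // React to Low Power Mode changes; pausing here is just one policy.
        NotificationCenter.default.addObserver(
            forName: .NSProcessInfoPowerStateDidChange,
            object: nil, queue: .main) { [weak self] _ in
            if ProcessInfo.processInfo.isLowPowerModeEnabled {
                self?.session.pause()
            }
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Run face tracking only while this screen actually needs it.
        session.run(ARFaceTrackingConfiguration(),
                    options: [.resetTracking, .removeExistingAnchors])
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        session.pause()   // releases the camera and stops per-frame work
    }
}
```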
Is it possible to connect 8 RealSense cameras to one PC without interference?
What would be needed for that?
I want to use them with the SDK in a Unity project (Windows 10).
If it is possible, please point me to the right product and SDK.
Thank you!
Check:
Multi-camera synchronization, IR interference, and USB bandwidth questions
Are multiple cameras supported yet?
Both give the R200 camera and librealsense as a starting point. In order to have good USB 3.0 throughput and minimal frame loss, you will need a very good USB 3.0 controller card. The StarTech PEXUSB3S4V should be a good starting point: buy two of them and insert them into PCIe 2.0 slots in your PC.
Then it's probably needed/wise to program 8 separate threads, so each RealSense gets its own thread (see the sketch below). For processing power you would want something like a 6-core processor able to handle up to 12 threads.
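The RealSense SDK itself is used from C++/C#, but the one-thread-per-camera pattern is language-agnostic; here is a rough sketch of the structure in Swift with GCD, where `Camera` and `grabFrame()` are hypothetical stand-ins for the SDK's device handle and blocking frame call:

```swift
import Foundation

// Hypothetical stand-in for an SDK device handle and its frame call.
struct Camera {
    let id: Int
    func grabFrame() -> Data? { nil }   // placeholder: would block for a frame
}

let cameras = (0..<8).map { Camera(id: $0) }

for camera in cameras {
    // One dedicated serial queue per device, so one slow camera
    // never stalls capture on the other seven.
    let queue = DispatchQueue(label: "camera.\(camera.id)")
    queue.async {
        while true {
            guard let frame = camera.grabFrame() else { continue }
            _ = frame   // process or hand off the frame here
        }
    }
}
```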
Keep in mind I haven't tested this. But if you want to build such a system, I would start in this way.
For those familiar with iOS: I was wondering if it's possible to program it to exceed the brightness limit, and if so, whether that is legal/safe. Thanks again.
Jon
Without altering the hardware, or possibly going deeper into a rooted system, it is not going to be possible. The limits are set where they are by Apple for safety reasons, or by hardware limitations (if they could make it brighter, they likely would). Legally, however, you are free to try to increase the brightness all you want; it will just void your warranty.
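For reference, the only public control is UIScreen's brightness property, which takes a normalized value; there is no supported way to push past its maximum:

```swift
import UIKit

// The public API: brightness is a normalized value in 0.0...1.0.
// 1.0 is the maximum the supported API allows.
UIScreen.main.brightness = 1.0
```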
I am working on a game for iPhone that is fully usable by providing YES / NO responses.
It would be great to make this game available to blind users, runners, and people driving cars by allowing voice control. This does not require full speech recognition; I am looking to implement keyword spotting.
I can already detect the start and stop of utterances, and have implemented this at https://github.com/fulldecent/FDSoundActivatedRecorder The next step is to distinguish between YES and NO responses reliably for a wide variety of users.
THE QUESTION: For reasonable performance (distinguishing YES / NO / STOP within 0.5 sec after speech stops), is AVAudioRecorder a reasonable choice? Is there a published algorithm that meets these needs?
Your best bet here is OpenEars, a free and open voice recognition platform for iOS.
http://www.politepix.com/openears/
You most likely DO NOT want to get into the algorithmic side of this. It's massive and nasty - there is a reason only a small number of companies do voice recognition from scratch.
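On the AVAudioRecorder part of the question: it is file-oriented, so for sub-second decisions a tap on AVAudioEngine's input node is a more direct way to get live PCM buffers to feed whatever recognizer you use. A minimal sketch (the classifier call is hypothetical, and on iOS you would also configure an AVAudioSession for recording):

```swift
import AVFoundation

// Pull live PCM buffers from the microphone for keyword spotting.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    // Feed `buffer` to OpenEars/Pocketsphinx or a custom YES/NO classifier here.
}

do {
    try engine.start()
} catch {
    print("Audio engine failed to start: \(error)")
}
```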
I'd like to have an iPhone and an Arduino-based device talk to each other. Here are the requirements:
I want to rely fully on the iPhone's built-in components, without any peripherals (for example, HiJack).
The less configuration required before the two can communicate, the better. This means a Wi-Fi-based solution is not desirable, because we'd need to set up Wi-Fi credentials on the Arduino beforehand.
Bitrate is not important. Only a few bytes are exchanged.
As cheap as possible.
I see that Bluetooth 4.0 LE (see, for example, the Stack Overflow question "iPhone - Any examples of communicating with an Arduino board using Bluetooth?") meets my requirements, but are there any cheaper solutions?
One thing that came to mind is sound, the way Chirp used to share data between two iOS devices, but I don't know whether that is feasible on an Arduino and, if it is, how much it would cost. Any other solutions?
I can think of a few options:
Bluetooth: you can get a cheap module from eBay for about $10 (see the CoreBluetooth sketch after this list).
Wi-Fi using Electric Imp (costs around $30), which is very easy to set up using the brilliant BlinkUp technique. See the project ElectricImp, control central heating via iPhone for an example.
Chirp is a brilliant idea as well. From a hardware perspective, it is feasible on an Arduino; you just need a microphone circuit ($8) and a speaker.
However, the real challenge is the software side, i.e., the algorithm you will use to encode data as sound and vice versa. If such an algorithm requires intensive computation, you might not be able to do it on an Arduino, and you could consider using an ARM-based microcontroller instead.
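For the Bluetooth option, here is a minimal sketch of the iPhone side, assuming the Arduino is wired to a BLE serial module (the "FFE0" service UUID is a common HM-10 default and may differ on your hardware):

```swift
import CoreBluetooth

// Scan for, and connect to, an assumed BLE serial module on the Arduino.
final class BLELink: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    private var central: CBCentralManager!
    private var peripheral: CBPeripheral?
    private let serviceUUID = CBUUID(string: "FFE0")   // HM-10 default (assumption)

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: [serviceUUID], options: nil)
        }
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        self.peripheral = peripheral   // keep a strong reference while connecting
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.delegate = self
        peripheral.discoverServices([serviceUUID])
        // From here: discover the characteristic and exchange your few bytes.
    }
}
```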