How to detect if the device has been rebooted in iOS?

I need to identify whether the device has been rebooted.
Currently I save the boot time in a database and periodically check the interval since the last boot, using the following code as suggested on the Apple Developer Forums:
func bootTime() -> Date? {
    var tv = timeval()
    var tvSize = MemoryLayout<timeval>.size
    let err = sysctlbyname("kern.boottime", &tv, &tvSize, nil, 0)
    guard err == 0, tvSize == MemoryLayout<timeval>.size else {
        return nil
    }
    return Date(timeIntervalSince1970: Double(tv.tv_sec) + Double(tv.tv_usec) / 1_000_000.0)
}
But the problem with this is that, even without a reboot, the tv.tv_sec value differs between calls by up to about 30 seconds (the variation ranges from 0 to 30 seconds).
Does anybody have any idea about this variation, or a better way to identify a device reboot, either without sysctl or with a more reliable sysctl?
https://developer.apple.com/forums/thread/101874?answerId=309633022#309633022
Any pointer is highly appreciated.
I have searched SO, and all the answers point to the solution mentioned here, which has the issue I described. Please don't mark this as a duplicate.
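Since no answer is included in this excerpt, one workaround worth sketching (my assumption, not an accepted solution): treat the device as rebooted only when the recomputed boot time moves by more than a tolerance. The variation is commonly attributed to the kernel recomputing kern.boottime whenever the wall clock is adjusted (for example by NTP), so an exact comparison will always produce false positives. The 60-second tolerance, the UserDefaults key, and the reuse of the bootTime() function above are all assumptions.

import Foundation

// Minimal sketch: report a reboot only when the boot time moves by more than
// `tolerance` seconds. Key name and tolerance are arbitrary assumptions.
func deviceWasRebooted(tolerance: TimeInterval = 60) -> Bool {
    guard let currentBoot = bootTime() else { return false }

    let defaults = UserDefaults.standard
    let key = "lastKnownBootTime" // hypothetical key

    guard let storedBoot = defaults.object(forKey: key) as? Date else {
        // First launch: remember the boot time and report no reboot.
        defaults.set(currentBoot, forKey: key)
        return false
    }

    if abs(currentBoot.timeIntervalSince(storedBoot)) > tolerance {
        defaults.set(currentBoot, forKey: key) // remember the new boot time
        return true
    }
    return false
}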

Related

SensorKit - Fetching data doesn't call the results delegate function

Recently we got approved by Apple to use SensorKit. Here are all the prerequisites I checked:
I'm pretty sure the project is configured properly. The entitlements file contains the list of all sensors, and Info.plist contains NSSensorKitUsageDescription, NSSensorKitUsageDetail and NSSensorKitPrivacyPolicyURL.
When I request authorization for a sensor (e.g. the accelerometer), the system dialog is presented, I approve it, and it's clearly allowed in the phone's Settings -> Privacy -> Research Sensor & Usage Data section.
I started recording for a sensor.
I waited more than 24 h (6 days, actually).
I created an SRFetchRequest (see the code below) with a correct time interval:
let now = Date()
let from = Date(timeInterval: -3 * 24 * 60 * 60, since: now) as NSDate
let to = Date(timeInterval: -2 * 24 * 60 * 60, since: now) as NSDate
let request = SRFetchRequest()
request.device = SRDevice.current
request.from = from.srAbsoluteTime
request.to = to.srAbsoluteTime
reader.fetch(request)
What's interesting is that no error is triggered. The func sensorReader(_ reader: SRSensorReader, didCompleteFetch fetchRequest: SRFetchRequest) delegate method is invoked, but the func sensorReader(_ reader: SRSensorReader, fetching fetchRequest: SRFetchRequest, didFetchResult result: SRFetchResult<AnyObject>) -> Bool method never gets called.
Did anyone get this working? Any ideas why it's not working?
For those interested, I found the issue: I was trying to collect the data from my Apple Watch Series 3, which is apparently not supported. With a Series 6 (or from the iPhone itself) it works correctly. Interestingly, device support isn't documented anywhere...
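For anyone wiring this up from scratch, below is a minimal sketch of the reader/delegate setup implied by the question, with the failedWithError callback added, since "no error is triggered" can also mean the failure delegate simply isn't implemented. Class and method body details are illustrative; authorization and recording are assumed to have been handled as described above.

import SensorKit

final class AccelerometerFetcher: NSObject, SRSensorReaderDelegate {
    // Keep a strong reference to this object; the reader's delegate is weak.
    let reader = SRSensorReader(sensor: .accelerometer)

    override init() {
        super.init()
        reader.delegate = self
    }

    func fetchWindow(daysAgoFrom: Double = 3, daysAgoTo: Double = 2) {
        let now = Date()
        let request = SRFetchRequest()
        request.device = SRDevice.current
        request.from = (now.addingTimeInterval(-daysAgoFrom * 24 * 60 * 60) as NSDate).srAbsoluteTime
        request.to = (now.addingTimeInterval(-daysAgoTo * 24 * 60 * 60) as NSDate).srAbsoluteTime
        reader.fetch(request)
    }

    // Called once per result; return true to keep receiving results.
    func sensorReader(_ reader: SRSensorReader,
                      fetching fetchRequest: SRFetchRequest,
                      didFetchResult result: SRFetchResult<AnyObject>) -> Bool {
        print("Sample at \(result.timestamp): \(result.sample)")
        return true
    }

    func sensorReader(_ reader: SRSensorReader, didCompleteFetch fetchRequest: SRFetchRequest) {
        print("Fetch completed")
    }

    // Without this, fetch failures are silent.
    func sensorReader(_ reader: SRSensorReader,
                      fetching fetchRequest: SRFetchRequest,
                      failedWithError error: Error) {
        print("Fetch failed: \(error)")
    }
}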

How to sync two music sequences accurately enough (AudioKit.AKAppleSequencer)?

I have 2 sequencers:
let sequencer1 = AKAppleSequencer(filename: "filename1")
let sequencer2 = AKAppleSequencer(filename: "filename2")
Both have the same BPM value.
When sequencer1 starts playing its one MIDI track (played only once), I need sequencer2 to begin playing exactly when the first sequencer finishes. How can I achieve this?
Note that sequencer2 is looped.
Currently I have this approach, but it is not accurate enough:
let callbackInstrument = AKMIDICallbackInstrument(midiInputName: "callbackInstrument", callback: nil)
let callbackTrack = sequencer1.newTrack()!
callbackTrack.setMIDIOutput(callbackInstrument.midiIn)

let beatsCount = sequencer1.length.beats
callbackTrack.add(noteNumber: MIDINoteNumber(beatsCount),
                  velocity: 1,
                  position: AKDuration(beats: beatsCount),
                  duration: AKDuration(beats: 0.1))

callbackInstrument.callback = { status, _, _ in
    guard AKMIDIStatusType.from(byte: status) == .noteOn else { return }
    DispatchQueue.main.async { self.sequencer2.play() } // not accurate
}

let sampler = AKMIDISampler(midiOutputName: nil)
sequencer1.tracks[0].setMIDIOutput(sampler.midiIn)
Appreciate any thoughts.
Apple's MusicSequence, upon which AKAppleSequencer is built, always flubs the timing for the first 100ms or so after it starts. It is a known issue in closed source code and won't ever be fixed. Here are two possible ways around it.
Use the new AKSequencer. It might be accurate enough to make this work (but no guarantees). Here is an example of using AKSequencer with AKCallbackInstrument: https://stackoverflow.com/a/61545391/2717159
Use a single AKAppleSequencer, but place your 'sequencer2' content after the 'sequencer1' content. You won't be able to loop it automatically, but you can repeatedly re-write it from your callback function (or pre-write it 300 times or something like that). In my experience, there is no problem writing MIDI to AKAppleSequencer while it is playing. The sample app https://github.com/AudioKit/MIDIFileEditAndSync has examples of time shifting MIDI note data, which could be used to accomplish this.
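To make the second suggestion concrete, here is a rough sketch (an assumption, not code from the answer) of pre-writing the looped section into the same AKAppleSequencer after the one-shot content, assuming AudioKit 4.x. The LoopNote type, loopLengthInBeats, and the repetition count of 32 are placeholders you would derive from your own MIDI content.

import AudioKit

// Placeholder description of one looped note; derive these from your own MIDI.
struct LoopNote {
    let noteNumber: MIDINoteNumber
    let position: Double   // beats, relative to the start of the loop
    let duration: Double   // beats
}

func buildSingleSequencer(loopNotes: [LoopNote], loopLengthInBeats: Double) -> AKAppleSequencer {
    // One sequencer holds both the one-shot intro and the "looped" section.
    let sequencer = AKAppleSequencer(filename: "filename1")
    let sampler = AKMIDISampler(midiOutputName: nil)
    sequencer.tracks[0].setMIDIOutput(sampler.midiIn)

    let introLength = sequencer.length.beats

    // Pre-write the looped section a number of times after the intro instead of
    // starting a second sequencer from a callback.
    for repetition in 0..<32 {
        let offset = introLength + Double(repetition) * loopLengthInBeats
        for note in loopNotes {
            sequencer.tracks[0].add(noteNumber: note.noteNumber,
                                    velocity: 100,
                                    position: AKDuration(beats: offset + note.position),
                                    duration: AKDuration(beats: note.duration))
        }
    }
    return sequencer
}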

MIDI packet list does not change timestamp

I have this code from here (Using MIDIPacketList in swift), but I can't PM the user or comment on that question, so I will ask my question here.
@ephemer, if you see my question: I love your MIDIPacketList code and it works very well, but when I change the timestamp, nothing happens. It should create some delay, but the behaviour is the same as with a 0 timestamp.
Does anyone know how to fix this?
Also, how can I get the timestamp out of this extension so I can set it per MIDI event?
I want to have it here:
var packets = MIDIPacketList(midiEvents: [[0x90, 60, 100]])
public typealias MidiEvent = [UInt8]

extension MIDIPacketList {
    init(midiEvents: [MidiEvent]) {
        let timestamp = MIDITimeStamp(0) // do it now
        let totalBytesInAllEvents = midiEvents.reduce(0) { total, event in
            return total + event.count
        }
        // Without this, we'd run out of space for the last few MidiEvents
        let listSize = MemoryLayout<MIDIPacketList>.size + totalBytesInAllEvents

        // CoreMIDI supports up to 65536 bytes, but in practical tests it seems
        // certain devices accept much less than that at a time. Unless you're
        // turning on / off ALL notes at once, 256 bytes should be plenty.
        assert(totalBytesInAllEvents < 256,
               "The packet list was too long! Split your data into multiple lists.")

        // Allocate space for a certain number of bytes
        let byteBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: listSize)

        // Use that space for our MIDIPacketList
        self = byteBuffer.withMemoryRebound(to: MIDIPacketList.self, capacity: 1) { packetList -> MIDIPacketList in
            var packet = MIDIPacketListInit(packetList)
            midiEvents.forEach { event in
                packet = MIDIPacketListAdd(packetList, listSize, packet, timestamp, event.count, event)
            }
            return packetList.pointee
        }
        byteBuffer.deallocate() // release the manually managed memory
    }
}
// to send, use this
var packets = MIDIPacketList(midiEvents: [[0x90, 60, 100]])
MIDISend(clientOutputPort, destination, &packets)
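To address the per-event part of the question, one option (an assumption on my part, not from the linked answer) is to add an overload of the initializer above that takes the timestamp as a parameter, so each packet list can be scheduled individually. It reuses the MidiEvent typealias from above; the body mirrors the original extension.

import CoreMIDI

extension MIDIPacketList {
    // Same construction as above, but the caller supplies the timestamp.
    init(midiEvents: [MidiEvent], timestamp: MIDITimeStamp) {
        let totalBytesInAllEvents = midiEvents.reduce(0) { $0 + $1.count }
        let listSize = MemoryLayout<MIDIPacketList>.size + totalBytesInAllEvents
        assert(totalBytesInAllEvents < 256,
               "The packet list was too long! Split your data into multiple lists.")

        let byteBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: listSize)
        self = byteBuffer.withMemoryRebound(to: MIDIPacketList.self, capacity: 1) { packetList -> MIDIPacketList in
            var packet = MIDIPacketListInit(packetList)
            midiEvents.forEach { event in
                packet = MIDIPacketListAdd(packetList, listSize, packet, timestamp, event.count, event)
            }
            return packetList.pointee
        }
        byteBuffer.deallocate()
    }
}

// Usage: build one list per event if every event needs its own timestamp.
// var packets = MIDIPacketList(midiEvents: [[0x90, 60, 100]], timestamp: someTimestamp)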
I figured out how it should work, but it is still not quite right.
According to the Apple documentation, it must use mach_absolute_time().
So I used that, but I don't know how to work properly with mach_absolute_time(). If anyone knows, please tell me.
If I use var timestamp = mach_absolute_time() + 1000000000 it delays by about a minute or so; if I replace the 1000000000 with any lower number, it doesn't produce a noticeable delay.
Does anyone know how to work with mach_absolute_time()?
I saw some code using mach_absolute_time(), but it was used as a timer (it actually measures time since the first boot). How do I produce a mach_absolute_time() value to use as a MIDI timestamp?
I found the answer.
Set the timestamp to:
var delay = 0.2
var timestamp: MIDITimeStamp = mach_absolute_time() + MIDITimeStamp(delay * 1_000_000_000)
Here delay is the time, in seconds, by which you want the MIDI message to be delayed.
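One caveat worth noting (my addition, not part of the original answer): on ARM-based iOS devices a mach tick is not one nanosecond, so multiplying seconds by 1_000_000_000 overshoots, which would also explain why the earlier experiment produced roughly a minute of delay. Converting through mach_timebase_info gives host ticks; a small sketch:

import Darwin
import CoreMIDI

// Convert a delay in seconds into a MIDITimeStamp (mach/host ticks) relative to now.
func midiTimeStamp(secondsFromNow seconds: Double) -> MIDITimeStamp {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let nanos = seconds * 1_000_000_000.0
    // nanoseconds -> mach ticks: ticks = ns * denom / numer
    let ticks = nanos * Double(timebase.denom) / Double(timebase.numer)
    return mach_absolute_time() + MIDITimeStamp(ticks)
}

// Usage: schedule a packet 0.2 s in the future.
// let timestamp = midiTimeStamp(secondsFromNow: 0.2)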

Decoding H264: VTDecompressionSessionCreate fails with error code -12910 (kVTVideoDecoderUnsupportedDataFormatErr)

I'm getting error -12910 (kVTVideoDecoderUnsupportedDataFormatErr) from VTDecompressionSessionCreate when running the code on my iPad, but not in the simulator. I'm using Avios (https://github.com/tidwall/Avios) and this is the relevant section:
private func initVideoSession() throws {
    formatDescription = nil
    var _formatDescription: CMFormatDescription?
    let parameterSetPointers: [UnsafePointer<UInt8>] = [ pps!.buffer.baseAddress, sps!.buffer.baseAddress ]
    let parameterSetSizes: [Int] = [ pps!.buffer.count, sps!.buffer.count ]
    var status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &_formatDescription)
    if status != noErr {
        throw H264Error.CMVideoFormatDescriptionCreateFromH264ParameterSets(status)
    }
    formatDescription = _formatDescription!

    if videoSession != nil {
        VTDecompressionSessionInvalidate(videoSession)
        videoSession = nil
    }

    var videoSessionM: VTDecompressionSession?

    let decoderParameters = NSMutableDictionary()
    let destinationPixelBufferAttributes = NSMutableDictionary()
    destinationPixelBufferAttributes.setValue(NSNumber(unsignedInt: kCVPixelFormatType_32BGRA), forKey: kCVPixelBufferPixelFormatTypeKey as String)

    var outputCallback = VTDecompressionOutputCallbackRecord()
    outputCallback.decompressionOutputCallback = callback
    outputCallback.decompressionOutputRefCon = UnsafeMutablePointer<Void>(unsafeAddressOf(self))

    status = VTDecompressionSessionCreate(nil, formatDescription, decoderParameters, destinationPixelBufferAttributes, &outputCallback, &videoSessionM)
    if status != noErr {
        throw H264Error.VTDecompressionSessionCreate(status)
    }
    self.videoSession = videoSessionM
}
Here pps and sps are buffers containing the PPS and SPS NAL units.
As mentioned above, the strange thing is that it works completely fine in the simulator, but not on an actual device. Both are on iOS 9.3, and the simulator is set to the same hardware as the device.
What could cause this error?
And, more generally, where can I go for API reference and error docs for VideoToolbox? Genuinely can't find anything of relevance on Apple's site.
The answer turned out to be that the stream resolution was greater than 1920x1080, which is the maximum the iPad supports. This is a clear difference from the simulator, which supports resolutions beyond that (perhaps it just uses the Mac VideoToolbox libraries rather than simulating the iOS ones).
Reducing the stream to below 1080p solved the problem.
This is the response from a member of Apple staff which pointed me in the right direction: https://forums.developer.apple.com/thread/11637
As for proper VideoToolbox reference - still nothing of value exists, which is a massive disadvantage. One wonders how the tutorial writers first got their information.
Edit: iOS 10 now appears to support streams greater than 1080p.
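As a small diagnostic sketch (my addition, not part of the answer): once the format description has been created from the SPS/PPS, its coded dimensions can be checked before creating the decompression session, so an over-limit stream fails with a clearer message than -12910. The 1920x1080 limit here is the one observed above, not a documented constant.

import CoreMedia

// Returns true if the coded video dimensions exceed the given limit.
func exceedsDecoderLimit(_ formatDescription: CMFormatDescription,
                         maxWidth: Int32 = 1920, maxHeight: Int32 = 1080) -> Bool {
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
    return dimensions.width > maxWidth || dimensions.height > maxHeight
}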

Apple Watch: Why is the app forced to stop when using CMSensorRecorder?

I used CMSensorRecorder to record historical accelerometer data, and accelerometerData(from:to:) to retrieve the recorded data. But if I record for an hour or more, the app is always force-stopped while retrieving, and I can hardly receive any data from that period.
While recording, everything is okay (as in Pic1), but when I press "Stop" to retrieve the data, it looks like Pic2. The question is: if I record data for a long time, the Pic2 phase lasts very long and usually causes the app to shut down, and then nearly all the data is lost!
[Pic1 / Pic2: screenshots not included]
My code below:
if CMSensorRecorder.isAccelerometerRecordingAvailable() {
    Now = NSDate()
    recorder.recordAccelerometerForDuration(time)
    print("\n\n\nRecording Accel!...........\n")
} else {
    print("Need Authorization to record!...\n")
    self.EatingButton.setTitle("AuthFail")
}

// retrieve data from the watch
let sensorDataList = recorder.accelerometerDataFromDate(Now, toDate: NSDate())
if sensorDataList == nil {
    SButton.setTitle("Failed")
} else {
    for data in sensorDataList! {
        // Send data to iPhone
        self.sendAccel("\(data.startDate) \(data.acceleration.x) \(data.acceleration.y) \(data.acceleration.z)")
    }
}
Can anyone help me?
Is there some relationship to the NSProcessInfo functions?
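This question has no answer in this excerpt. As a sketch of one possible mitigation (an assumption on my part), the retrieval loop can be moved off the main thread and the readings forwarded in batches, since enumerating an hour or more of recorded samples on the UI thread is enough to trip the watchdog. The Sequence extension is the usual workaround for CMSensorDataList only adopting NSFastEnumeration; the batch size of 200 is arbitrary.

import CoreMotion
import Foundation

// CMSensorDataList only adopts NSFastEnumeration; this common workaround
// allows for-in iteration in Swift.
extension CMSensorDataList: Sequence {
    public func makeIterator() -> NSFastEnumerationIterator {
        return NSFastEnumerationIterator(self)
    }
}

let recorder = CMSensorRecorder()

// Enumerate the recorded data on a background queue and forward it in batches,
// so the UI thread never blocks long enough for the watchdog to kill the app.
func retrieveAccelerometerData(from start: Date, to end: Date,
                               send: @escaping ([String]) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        guard let list = recorder.accelerometerData(from: start, to: end) else { return }
        var batch: [String] = []
        for item in list {
            guard let data = item as? CMRecordedAccelerometerData else { continue }
            batch.append("\(data.startDate) \(data.acceleration.x) \(data.acceleration.y) \(data.acceleration.z)")
            if batch.count == 200 {
                let chunk = batch
                DispatchQueue.main.async { send(chunk) }
                batch.removeAll()
            }
        }
        if !batch.isEmpty {
            DispatchQueue.main.async { send(batch) }
        }
    }
}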
