I want to let my device (iPhone/Watch) record the movements of the user. It seems Apple doesn't provide a way to do this in "real time". The only way I have found is to use CMSensorRecorder to get "historical" data from the past. But this sensor recorder samples at 50 Hz, so there are ~50 samples per second. If you want to record the user's activity for, say, two hours or more, then you have to process 50 * 60 * 60 * 2 = 360,000 samples. That's a real pain on the Apple Watch because of its processor.
I've seen apps on the App Store which seem to use exactly this sensor recorder to do analysis based on the movement data. Is there another way to get movement data from the past? Or a way to set the sample rate of the recorder? I have already put hours of work into my app, but I can't get the data processing fast enough. It's really slow ...
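For reference, the CMSensorRecorder flow I found looks roughly like this (a minimal sketch; availability checks and error handling omitted):
#import <CoreMotion/CoreMotion.h>

// Record for a window now; read the "historical" samples back later.
CMSensorRecorder *recorder = [[CMSensorRecorder alloc] init];
[recorder recordAccelerometerForDuration:2 * 60 * 60];   // record two hours

// Later, pull the recorded samples back out:
NSDate *start = [NSDate dateWithTimeIntervalSinceNow:-2 * 60 * 60];
CMSensorDataList *list = [recorder accelerometerDataFromDate:start toDate:[NSDate date]];
for (CMRecordedAccelerometerData *sample in list) {
    // ~50 samples arrive per recorded second, so keep per-sample work minimal.
    CMAcceleration a = sample.acceleration;
    NSLog(@"%f %f %f", a.x, a.y, a.z);
}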
I am building a video conferencing app with TokBox. I would like to give the user an indication of how well the streams are behaving. I have noticed that the OTSubscriberKitNetworkStatsDelegate lets you view how many audio and video packets a subscriber has lost. What is unclear is whether this is an indication of the health of your connection or theirs. I assume that I could use this delegate to view my own dropped packets (as a publisher AND a subscriber). Would this be the way to calculate some kind of bandwidth indicator for TokBox?
UPDATE:
Great answers, and so quickly too! Impressive OpenTok community. Just to finish up here: the OTNetworkTest is awesome and actually uses the OTSubscriberKitNetworkStatsDelegate to calculate the quality of the stream, as I suspected. The only issue with it is that it is designed to run before you start your session. I need a test that can run as part of the existing session, so I am going to strip out the calculation parts and create a version of this class that uses my own subscriber data. Thanks for all the help, folks.
Well, actually there are a few approaches.
Naive solution
A rough yet simple one: calculate the size of a frame, multiply it by the frame rate (the real one, not the nominal one), and then add the audio bitrate in kbps. You should get a fairly accurate picture of the actual bandwidth.
For frame rate calculation, read about dynamic frame rate controls.
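As a back-of-the-envelope sketch (all numbers are illustrative):
double bytesPerFrame = 12000.0;   // average size of one compressed frame
double measuredFps = 24.0;        // the real rate, not the nominal one
double audioKbps = 64.0;          // audio bitrate
double videoKbps = bytesPerFrame * 8.0 * measuredFps / 1000.0;   // 2304 kbps
double totalKbps = videoKbps + audioKbps;                        // ~2368 kbps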
OpenTok approach (The legit one)
I bet that a good user-experience solution would be not to show that everything's bad, but to adjust the stream quality, indicating errors only in case of a total failure (like Skype does). Look at this:
Starting with our 2.7.0 mobile SDK release, you can start a publisher with pre-determined video resolution and frames per second (fps). Before using the API, you should be aware of the following:
Though HD video sounds like a good idea at first, from a practical standpoint you may run into issues with device CPU load on low to medium range devices. You may also be limited by the user's available bandwidth. Lastly, data charges for your users could run high.
The available resolutions and frame rates depend on the device. The actual empirical values for these parameters will vary based on the specific device. Your selection can be seen as a maximum for the resolution and frame rate you are willing to publish.
The resolution and frame rate are automatically adjusted based on various parameters like a user's packet loss, CPU utilization, and network bandwidth/bit-rate. Rather than attempting to do this dynamically on your own, we recommend picking meaningful values and allowing OpenTok to handle the fine tuning.
To conserve bandwidth, set your publisher video type property to "screen" instead of the default "camera" value.
Taken from here
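For what it's worth, starting a publisher with a pre-determined resolution and frame rate looks roughly like this (a sketch using the 2.7-era class and enum names as I recall them; verify against the current SDK docs):
#import <OpenTok/OpenTok.h>

// Pick a modest maximum and let OpenTok fine-tune downward on its own.
OTPublisherSettings *settings = [[OTPublisherSettings alloc] init];
settings.name = @"my-publisher";
settings.cameraResolution = OTCameraCaptureResolutionMedium;   // skip HD on weak devices
settings.cameraFrameRate = OTCameraCaptureFrameRate15FPS;
OTPublisher *publisher = [[OTPublisher alloc] initWithDelegate:self settings:settings];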
So, here's what you should do:
Implement the <OTSubscriberKitNetworkStatsDelegate> protocol first. It has a method called
- (void)subscriber:(OTSubscriberKit *)subscriber videoNetworkStatsUpdated:(OTSubscriberKitVideoNetworkStats *)stats
which, as you can see, receives an OTSubscriberKitVideoNetworkStats object.
Next, you can extract four properties from this object:
@property (readonly) uint64_t videoPacketsLost - The estimated number of video packets lost by this subscriber.
@property (readonly) uint64_t videoPacketsReceived - The number of video packets received by this subscriber.
@property (readonly) uint64_t videoBytesReceived - The number of video bytes received by this subscriber.
@property (readonly) double timestamp - The timestamp, in milliseconds since the Unix epoch, for when these stats were gathered.
So, feel free to play around with these values and implement the best solution for your app.
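For instance, here is a minimal sketch that turns two successive callbacks into an estimated incoming video bitrate (prevVideoBytes and prevTimestamp are illustrative properties you would add to your own class):
- (void)subscriber:(OTSubscriberKit *)subscriber videoNetworkStatsUpdated:(OTSubscriberKitVideoNetworkStats *)stats
{
    if (self.prevTimestamp > 0) {
        double seconds = (stats.timestamp - self.prevTimestamp) / 1000.0;   // timestamp is in ms
        if (seconds > 0) {
            double kbps = (stats.videoBytesReceived - self.prevVideoBytes) * 8.0 / seconds / 1000.0;
            NSLog(@"Estimated incoming video bitrate: %.0f kbps", kbps);
        }
    }
    self.prevVideoBytes = stats.videoBytesReceived;
    self.prevTimestamp = stats.timestamp;
}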
Moreover, they have published an article specifically addressing how to manage different bandwidths on conference calls. Check it out.
UPD:
While I was writing this answer, @JaideepShah mentioned an amazing example. Read the explanation for that example thoroughly. There is a table indicating the proper resolutions for the values I mentioned above.
It would be the health of your network connections to the TokBox platform/cloud.
The code at https://github.com/opentok/opentok-network-test shows you how to calculate the audio and video bitrate and this could be used as an indicator.
You are calculating the subscriber stats and not the publisher stats.
I am trying to create an RTSP client which live-broadcasts audio and video. I modified the iOS code at http://www.gdcl.co.uk/downloads.htm and am able to broadcast the video to the server properly. But now I am facing issues with the audio part. In the linked example, the code is written in such a way that it writes the video data to a file, then reads the data from the file and uploads the NALU video packets to the RTSP server.
For the audio part I am not sure how to proceed. Right now what I have tried is to get the audio buffer from the mic and then broadcast it to the server directly by adding RTP headers and ALU.. But this approach is not working properly, as the audio starts lagging behind and the lag increases with time. Can someone let me know if there is a better approach to achieve this, with lip-synced audio/video?
Are you losing any packets on the client? If so, you need to leave "space". If you receive packets 1, 2, 3, 4, 6, 7, you need to leave space for the missing packet (5).
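A sketch of that idea, assuming RTP-style 16-bit sequence numbers (all names here are illustrative):
#import <Foundation/Foundation.h>

// When sequence numbers skip ahead, enqueue one packet's worth of silence
// per missing packet so the later packets stay on schedule.
static void EnqueuePacket(uint16_t seq, NSData *payload, uint16_t *expectedSeq,
                          NSMutableArray<NSData *> *playQueue, NSUInteger packetBytes)
{
    while (*expectedSeq != seq) {
        [playQueue addObject:[NSMutableData dataWithLength:packetBytes]];   // zero-filled = silence
        (*expectedSeq)++;
    }
    [playQueue addObject:payload];
    (*expectedSeq)++;
}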
The other possibility is what is known as a clock drift problem. The clocks (crystals) on your client and server are not perfectly in sync with each other.
This can be caused by environment, temperature changes, etc.
Let's say that in a perfect world your server is producing 20 ms audio packets at 48000 Hz, and your client is playing them back at a sample rate of 48000 Hz. Realistically, your client and server are not running at exactly 48000 Hz: your server might be at 48000.001 and your client at 47999.9998. So your server might be delivering faster than your client, or vice versa. You would either consume packets too fast and underrun the buffer, or lag too far behind and overflow the client buffer. In your case, it sounds like the client is playing back too slowly and gradually falling behind the server. You might only lag a couple of milliseconds per minute, but the problem compounds until it looks like a 1970s lip-synced kung fu movie.
In other devices, there is often a common clock line to keep things in sync: video camera clocks, MIDI clocks, multitrack recorder clocks, for example.
When you deliver data over IP, there is no common clock shared between client and server. So your issue is one of syncing clocks between disparate devices that share no common reference. I have successfully solved this problem using this general approach:
A) Let the client count the rate of packets that come in over a period of time.
B) Let the client count the rate that the packets are consumed (played back).
C) Adjust the sample rate of the client based on A and B.
So your client requires that you adjust the sample rate of the playback: yes, you play it faster or slower. Note that the playback rate change will be very, very subtle. You might set the sample rate to 48000.0001 Hz instead of 48000 Hz. The difference in pitch would be undetectable by humans, as it would amount to only a fraction of a cent. This is a very simplified explanation; there are many other nuances and edge cases that must be considered when developing such a control system. You don't just set it and forget it: you need a control system to manage the playback.
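A very simplified sketch of steps A through C (the damping factor is illustrative; a real control system needs much more care):
#import <Foundation/Foundation.h>

// Derive a corrected playback sample rate from the measured receive rate (A)
// and consumption rate (B); feed the result to whatever audio engine you use.
static double CorrectedSampleRate(double nominalHz,
                                  double packetsReceivedPerSec,   // step A
                                  double packetsPlayedPerSec)     // step B
{
    double ratio = packetsReceivedPerSec / packetsPlayedPerSec;
    // Damp the correction so the pitch change stays far below audibility (C).
    return nominalHz * (1.0 + 0.1 * (ratio - 1.0));
}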
An interesting test to demonstrate this is to take two devices with the exact same file. A long recording (say 3 hours) is best. Start them at the same time. After 3 hours of playback, you will notice that one is ahead of the other.
This post explains that it is NOT a trivial task to stream audio and video.
I'm making an iPhone game that runs at 50 frames per second. I'm thinking of implementing multiplayer in my game using Game Center, but first I have a question: how fast can I send data using Game Center? I will only send a struct with three floats. Is it possible to send data fast enough that I receive data every 20 ms (1/50 s)?
At best, it could take around 15 milliseconds to send data.
The only thing you can depend on when sending stuff across the network from a mobile device is that the connection will be slow and intermittent.
If you assume anything else then you will run into problems.
You should always program for the case where the data takes a "long" time to come through and may not come through at all.
For instance, if you're making a real-time multiplayer game, then have some way of the opponent's character moving in a "best guess" way until the next bit of data comes through. Etc...
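For example, a crude dead-reckoning sketch (all names are illustrative): extrapolate the opponent's position from the last state you received until fresh data arrives.
#import <CoreGraphics/CoreGraphics.h>

typedef struct {
    float x, y;         // last known position
    float vx, vy;       // last known velocity
    double timestamp;   // when that state was received
} RemoteState;

// "Best guess": assume the opponent kept moving at the last known velocity.
static CGPoint PredictedPosition(RemoteState s, double now)
{
    double dt = now - s.timestamp;
    return CGPointMake(s.x + s.vx * dt, s.y + s.vy * dt);
}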
Also, your game should be running at 60 fps, not 50.
Basically, for my team's app, we need to be able to synchronize music across multiple iOS devices. The first way we did this was by having the music on all the devices already and just sending a play command to all the devices. Some would get it later than others, so that method did not work. There was an idea mentioned to calculate the latency between all the devices and send the commands at the appropriate times based on the latency.
The second way proposed would be to stream the music. If we were to implement streaming, how should we go about doing it? Should Audio Units be used, OpenAL, etc.? Also, if streaming were used, how would we go about making sure that each device's stream is in sync?
Basically, the audio has to be in sync so that the person hearing it cannot differentiate between the devices. A few milliseconds off should not be a problem (unless the listener has super-human hearing).
You'd be amazed at how good the human ear is at spotting audio anomalies...
Sync the time of day
Effectively you're trying to meet a real-time requirement with a whole load of very variable things in the way (WiFi, etc.). I strongly suspect the only way you're going to get close to doing this is to issue a 'play' instruction that includes a particular time to start playing. Of course, that relies on all the clocks being accurately set.
NTP
I don't know how iPhones get their time of day. If they use (or could use) NTP then you'll be getting close. NTP is designed to convey accurate time of day information over a network despite variable network delays. I've had a quick look and it seems that most NTP clients for iOS are the simple ones, not the full NTP that measures and tunes out network delays, clock drifts, etc.
GPS
Alternatively GPS is also a very good source of time information. Again I don't know if iPhones can or do use GPS for setting their clock but if it could be done then that would likely be pretty good. On Solaris (and I think Linux too) the 1 pulse per second that most GPS chips generate from the GPS signal can be used to condition the internal OS clock, making it very accurate indeed (sub microsecond accuracy).
I fear that iPhones don't do either of these things natively; both involve using a fair bit of electricity, so I wouldn't be surprised if they did something else less sophisticated.
Cell Time Service
Some Cell networks provide a time service too, but I don't think it's designed for accurate time setting. Also it tends not to be available everywhere. You often find it at major airports so that recent arrivals get their phones set to something close to local time.
Play at time X
So if one of those could be used to ensure that all the iPhones are set to exactly the same time of day then all you have to do is write your software to start playing at a specific time. That will probably involve polling the clock in a very tight loop waiting for it to tick over; most OSes don't provide a means of sleeping until a specific time. They do at least allow for sleeping for a period of time, which can be used to sleep until close to the appointed time. You'd then start polling the clock until the right time is reached.
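A rough sketch of that poll-the-clock approach (assuming the device clocks are already synced by one of the means above):
#import <Foundation/Foundation.h>

// Coarse-sleep until just before the agreed start time, then spin.
static void PlayAtDate(NSDate *startDate, void (^play)(void))
{
    NSTimeInterval remaining = [startDate timeIntervalSinceNow];
    if (remaining > 0.05) {
        [NSThread sleepForTimeInterval:remaining - 0.05];   // sleep to ~50 ms out
    }
    while ([startDate timeIntervalSinceNow] > 0) {
        // busy-wait the last few milliseconds for a tighter start
    }
    play();
}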
Delay Measurement and Standard Deviation
Your first method is doomed, I think. You might be able to measure average delays and so forth, but that doesn't mean that every message has exactly the same latency. The standard deviation in the latency will tell you what you can expect to achieve, and I don't think it's going to be particularly small. If that's the case, then the message has to include a timestamp.
NTP can work because it's only interested in the average delay measured over a period of time (hours sometimes), whereas you're interested in instantaneous delay.
Streaming with RTP
Your second method may work if you can time sync the devices as discussed above. The RTP protocol was designed for use in these circumstances; it doesn't help with achieving sync, but it does help a lot with the streaming. It tells you whereabouts in the stream any one piece of received data fits, allowing you to play it at the right time.
Clock Drift
Another problem to deal with is how long you're playing for. If it's a long time then you may discover that the 44kHz (or whatever) audio clock rate on each device isn't quite the same. So, whilst you might find a way of starting to play all at the same time, the separate devices will then start diverging ever so slightly. Over a long period of time they may be noticeably out.
Bluetooth
It might be possible to do something with Bluetooth. It has many weird and wonderful profiles, and it might be that one of those would serve to send an accurate 'start now' message.
Audio Trigger
You might also use sound as a means of conveying a start signal. One device can play a particular sound whilst your software in the others is listening with the mic. When a particular feature is detected in the sound, that's the time for everyone to start playing. Sort of a computerised "1, 2, a 1 2 3 4".
Camera Flash
Should be easy to spot in software...
I think your first way would work if you expand it a little bit. Assuming all the clocks on the devices are in sync, you could include a timestamp in your play command. Then each device would calculate the time between the timestamp and when it received the command. You would then play the music, offset by the time difference.
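A minimal sketch of that idea, assuming an AVAudioPlayer and a command dictionary of your own design (both illustrative), with the device clocks already in sync:
#import <AVFoundation/AVFoundation.h>

// 'command' arrived over the network; the sender stamped it with its own
// clock (seconds since the Unix epoch). Both names are made up here.
- (void)handlePlayCommand:(NSDictionary *)command player:(AVAudioPlayer *)player {
    NSTimeInterval sentAt = [command[@"timestamp"] doubleValue];
    NSTimeInterval delay = [[NSDate date] timeIntervalSince1970] - sentAt;
    player.currentTime = delay;   // skip the audio we "missed" in transit
    [player play];
}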
Currently I put the tracking code in viewWillAppear to track page views.
I found that if I quickly switch views back and forth without the content being fully loaded, the tracker still sends the traffic as many times as the number of view switches.
Can I prevent this kind of spam in my analytics report?
Where is the best place to put the tracking code for tracking an iPhone app's page views?
Set the Sampling Rate:
You can set the sample rate using the property sampleRate. If your application generates a lot of Analytics traffic, setting the sample rate may prevent your reports from being generated using sampled data. Sampling occurs consistently across unique visitors, so there is integrity in trending and reporting when sampling is enabled. The sampleRate parameter is an NSUInteger and can have a value between 0 and 100, inclusive. Here is an example that turns the sampleRate down to 95%:
[[GANTracker sharedTracker] setSampleRate:95];
A rate of 0 turns off hit generation, while a rate of 100 sends all data to Google Analytics. It's best to set sampleRate prior to calling any tracking methods.
You can learn more about sampling from the Sampling Concepts Guide.
From: http://code.google.com/mobile/analytics/docs/iphone/#overview
If you're really worried about that, make an NSTimer that fires in, say, 5 seconds, and send the traffic when it fires. Invalidate the timer when your view disappears.
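Something like this sketch (GANTracker calls as in the docs above; the page path and property names are illustrative):
// Only count the page view if the user actually stays for 5 seconds.
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    self.trackTimer = [NSTimer scheduledTimerWithTimeInterval:5.0
                                                       target:self
                                                     selector:@selector(sendPageView)
                                                     userInfo:nil
                                                      repeats:NO];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.trackTimer invalidate];   // user left too soon; don't report the view
    self.trackTimer = nil;
}

- (void)sendPageView {
    NSError *error = nil;
    [[GANTracker sharedTracker] trackPageview:@"/home" withError:&error];
}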
viewDidAppear might work. It may have the same problem though.