Flurry / Google Analytics / Localytics bandwidth consumption on iOS

I'm choosing an analytics service for my iOS app. I want to track quite a lot of events, and the app I'm developing is going to be used outdoors, so there will be no wi-fi connection available, and even the cellular connectivity can be poor.
Analytics is the only thing that requires network connectivity in my app. Recently I checked how much traffic it consumes, and it was much more than I expected: about 500 KB for Google Analytics and about 2 MB for Flurry, and that's just for a 2-minute session with a few hundred events. That seems very inefficient to me. (Flurry logs a few more parameters, but definitely not 4 times more.)
I wonder: has anybody compared other popular analytics solutions for their bandwidth consumption? Which one is the slimmest?
Thank you

If you don't need real-time data (and you probably don't with an outdoor app), you can get the best network efficiency from Google Analytics by dispatching more hits at once, so they benefit from batching and compression. To do that, set the dispatch interval to 30 minutes.

The maximum size of an uncompressed hit that Analytics will accept is about 8 KB, so each hit you send should be smaller than that. Compression brings an individual hit down to roughly 25% of its original size, assuming mostly ASCII data. To generate 500 KB of traffic you must have been sending a few hundred hits individually. With batching and compression, hits shrink much more efficiently: a batch of 20 hits will usually compress to less than 10% of its uncompressed size, or about 800 bytes per hit at most. For further network savings, send less data per event, or fewer events.

By the way, Analytics has a rate limit of 60 tokens, replenished at a rate of 1 hit every 2 seconds. If you are sending a few hundred events in a short period of time, your data is likely getting rate limited.
https://developers.google.com/analytics/devguides/collection/ios/limits-quotas#ios_sdk
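
For illustration, a minimal Swift sketch of that configuration, assuming the Google Analytics iOS SDK is integrated and visible to Swift (e.g. via a bridging header); GAI.sharedInstance() and dispatchInterval are the SDK's standard entry points:

// Batch hits for 30 minutes so they are dispatched together and
// compressed as one payload, instead of one request per hit.
if let gai = GAI.sharedInstance() {
    gai.dispatchInterval = 1800  // seconds; the SDK default is 120
}

The trade-off is freshness: hits queue on the device until the next dispatch, which is exactly what you want when you don't need real-time data.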

Related

Why do you use `stream` for GRPC/protobuf file transfers?

I've seen a couple examples like this:
syntax = "proto3";

import "google/protobuf/empty.proto";

service Service {
  rpc upload(stream Data) returns (google.protobuf.Empty) {}
  rpc download(google.protobuf.Empty) returns (stream Data) {}
}

message Data { bytes bytes = 1; }
What is the purpose of using stream? Does it make the transfer more efficient?
In theory, yes: I obviously want to stream my file transfers, but that's what happens over a connection anyway... So what is the actual benefit of this keyword? Does it enforce some form of special buffering to reduce overhead? Either way, the data is being transmitted in full!
It's more efficient because multiple messages can be sent within a single call.
This avoids not only re-establishing another connection with the server (hopefully a TLS one, i.e. even more work) but also spinning up new client and server "stubs"; both the client and server stay ready for more messages.
It's somewhat like being on a telephone call with a friend who, before hanging up, says "Oh, one more thing...", instead of hanging up and then calling you back 5 minutes later, interrupting dinner and making you pause a movie.
The answer is very similar to the gRPC + Image Upload question, although from a different perspective.
Doing a large download (10+ MB) as a single response message puts strong limits on the size of that download, as the entire response message is sent and processed at once. For most use cases, it is much better to chunk a 100 MB file into 1-10 MB chunks than require all 100 MB to be in memory at once. That also allows the downloader to begin processing the file before the entire file is acquired which reduces processing latency.
Without streaming, chunking would require multiple RPCs, which are annoying to coordinate and have performance complications. Because there is latency to complete RPCs, for reasonable performance you either have to do many RPCs in parallel (but how many?) or have a large batch size (but how big?). Multiple RPCs can also hit colder application caches, as each RPC goes to a different backend.
Using streaming provides the same throughput as the non-chunking approach without as many headaches of normal chunking approaches. Since streaming is pipelined (server can start sending next chunk as soon as previous chunk is sent) there's no added per-chunk latency between the client and server. This makes it much easier to choose a chunk size, as there is a wide range of "reasonable" sizes that will behave similarly and the system will naturally react as network performance varies.
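
As a concrete sketch of the chunking described above, here is a client-side read loop in Swift. The send closure is a hypothetical stand-in for whatever per-message send call your generated gRPC stub exposes, and the 1 MB chunk size is just one value in the "reasonable" range:

import Foundation

// Read the file in fixed-size chunks and emit one stream message per chunk.
// gRPC pipelines these, so the server can process chunk N while N+1 is in flight.
func uploadFile(at url: URL,
                chunkSize: Int = 1 << 20,              // 1 MB, an assumed value
                send: (Data) throws -> Void) throws {  // hypothetical stub call
    let handle = try FileHandle(forReadingFrom: url)
    defer { try? handle.close() }
    while true {
        let chunk = handle.readData(ofLength: chunkSize)
        if chunk.isEmpty { break }                     // end of file
        try send(chunk)                                // one Data message per chunk
    }
}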
While sending a message on an existing stream has less overhead than creating a new RPC, for many users the difference is negligible, and it is generally better to structure your RPCs in a way that is architecturally beneficial to your application rather than to eke out small optimizations in gRPC. The reason to use a stream in this case is to make your application perform better at lower complexity.

Download performance of AVAssetDownloadTask

I'm using AVAssetDownloadTask to download some FairPlay-encrypted audio. As per guidelines, the audio is split up into small chunks to allow switching between bitrates during streaming. Our chunks are about 6 seconds each, which means less than 100 KB per segment.
The download speed of this process is pretty bad. I've seen speeds between 85 KB/s and 250 KB/s, on a connection where downloading a new Xcode beta gives me several megabytes per second.
I'm guessing the slow speed is due to having to make a separate request for each segment, which adds a lot of overhead. I've tried inspecting the download traffic using Charles, and even though it shows one HTTPS connection per download task, the request body size continually ticks upward over the lifetime of the download. I also tried downloading a 100 MB test file from the same server where the audio files live, and it came down at a few megabytes per second.
My question: what are best practices for getting good download performance with AVAssetDownloadTask? Should the segments be larger? Should there be a separate single-chunk file just for downloading? Or is this behavior odd, suggesting I've got something configured wrong?
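
For context, a minimal sketch of how such a download is typically set up (the session identifier, title, and bitrate floor are placeholders, not recommendations); AVAssetDownloadTaskMinimumRequiredMediaBitrateKey is one documented option that influences which variant, and therefore which segment sizes, the task fetches:

import AVFoundation

final class AudioDownloader: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!

    func start(playlistURL: URL) {
        // AVAssetDownloadURLSession requires a background configuration.
        let config = URLSessionConfiguration.background(withIdentifier: "audio-downloads")
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
        let task = session.makeAssetDownloadTask(
            asset: AVURLAsset(url: playlistURL),
            assetTitle: "Episode",
            assetArtworkData: nil,
            options: [AVAssetDownloadTaskMinimumRequiredMediaBitrateKey: 256_000])
        task?.resume()
    }

    // Delegate callback with the finished download's on-disk location.
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Persist `location`; the asset should be played from this local URL.
    }
}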

Is a high `min_replan_interval` a good idea for a low traffic app?

We have an app with a Neo4j backend that receives relatively little traffic, at most a few hundred hits per hour. I'm wondering whether the default value of 1 second for dbms.cypher.min_replan_interval means that all our query plans get replanned between calls, and whether we might see better performance if we increased it.
Also, are there any dangers in increasing it? If the structure of our data doesn't change much, wouldn't it be a good idea to keep the query plans for as long as possible?
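
If you do experiment with increasing it, it's a one-line change in neo4j.conf; the 1h value below is purely illustrative, not a recommendation:

# neo4j.conf
# Keep compiled query plans for at least an hour before considering a replan.
dbms.cypher.min_replan_interval=1h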

How to know the speed of the internet in iOS programmatically

I am developing an application in which everything depends on the internet. My requirement is that when the internet speed is low, the app should display an alert to the user: "Your internet speed is low, so this feature is not available to you."
I want to know whether iOS offers any way to detect that the connection is slow, or to measure its speed. I have found 2-3 answers about this but haven't got any feasible solution.
There is no magic call to know whether the Internet connection is fast or slow. There's only one solution: transfer some data and time how long it takes.
The problem is that the next chunk of data can be much slower, much faster, or about the same.
So you really need some sort of threshold where if the app is unable to transfer at least X number of bytes in Y seconds, then you stop the transfer and alert the user.
In other words, there's no simple way to ask "Is the connection fast or slow?".
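
A minimal sketch of that threshold approach in Swift; the test URL and the 50 KB/s floor are assumptions you would tune for your app:

import Foundation

// Download a small known resource, time it, and compare the measured
// throughput against a minimum acceptable rate.
func checkConnectionSpeed(testURL: URL,
                          minBytesPerSecond: Double = 50_000,  // assumed threshold
                          completion: @escaping (Bool) -> Void) {
    let start = Date()
    URLSession.shared.dataTask(with: testURL) { data, _, error in
        guard let data = data, error == nil else {
            completion(false)  // transfer failed: treat as slow
            return
        }
        let elapsed = Date().timeIntervalSince(start)
        completion(Double(data.count) / elapsed >= minBytesPerSecond)
    }.resume()
}

And, per the caveat above, one sample only tells you about that sample; in practice you would repeat the measurement or time your real transfers rather than a synthetic one.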

iPad Cookies - Possible to burn out flash memory?

I am currently writing a full-screen web application for the iPad. This application will likely write a small amount of data to a local cookie once per second. A user will interact with the application for about 20 to 30 minutes at a time, and I imagine they will use it between one and four times a week: a maximum of 120 minutes, or 7,200 cookie writes, per week.
I was unable to find information regarding the number of write cycles that the iPad's internal flash memory can handle. I am concerned that my frequent cookie writes may shorten the lifespan of the iPad's internal flash memory. Is this a valid concern? If not, why not and at what number of writes should I be concerned?
I am not completely opposed to using other storage methods, such as HTML5 local storage, if this mitigates the risk. However, I would prefer cookies, because part of this application will be delivered on other browsers where HTML5 local storage is not supported.
