How do I track the amount of data sent/received on iOS?

I'd like to track how much data is transferred to/from an iOS app's backend server. I'm using Apple's NSURLSessionDataTask, which automatically handles gzip decompression. I can easily get the size of the decompressed data. Is there any way to get the size of the data before decompression? Or even better, the size of all data transferred, not just the body?
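I don't think there is a single canonical answer here, but one approach, assuming you can target iOS 13 or later (where per-transaction byte counts were added to NSURLSessionTaskTransactionMetrics), is to adopt the task-metrics delegate callback, which reports the body size as it came over the wire, the size after decoding, and the header byte counts. A minimal sketch:

- (void)URLSession:(NSURLSession *)session
              task:(NSURLSessionTask *)task
didFinishCollectingMetrics:(NSURLSessionTaskMetrics *)metrics {
    for (NSURLSessionTaskTransactionMetrics *t in metrics.transactionMetrics) {
        // Bytes as they crossed the wire (still compressed) vs. after decoding.
        NSLog(@"request: %lld header + %lld body bytes sent",
              t.countOfRequestHeaderBytesSent, t.countOfRequestBodyBytesSent);
        NSLog(@"response: %lld header + %lld body bytes received (%lld after decoding)",
              t.countOfResponseHeaderBytesReceived,
              t.countOfResponseBodyBytesReceived,
              t.countOfResponseBodyBytesAfterDecoding);
    }
}

Here countOfResponseBodyBytesReceived is the compressed, on-the-wire size, while countOfResponseBodyBytesAfterDecoding is the size you see after the automatic gzip handling.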

Related

The most efficient way to read data from a sensor and write to a file

I am working on a system which is capable of:
Reading data from a Bluetooth sensor
Converting data into int16
Writing data in a .bin file
Displaying data on a mobile application screen
The problem I have is that, since the file read and write operations are asynchronous, some of the data transferred by the Bluetooth device never gets written to the file. I suspect I am receiving data faster than I can write it.
The solution I have come up with so far is to create a circular buffer (maybe even two: one for writing to the .bin file and one for displaying a 10-second window on the mobile screen). That way, even though the write operation is slower, I should not lose any data, and monitoring can still happen live.
Is there a more efficient way to quickly write the data received by a sensor to a file without losing it?
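For what it's worth, here is a minimal sketch of the usual shape of a fix, assuming the packets arrive as NSData (the SensorRecorder class and its method names are invented for illustration): a serial dispatch queue decouples the Bluetooth callback from the disk, so packets are queued in arrival order rather than dropped.

@interface SensorRecorder : NSObject
@property (nonatomic, strong) dispatch_queue_t writeQueue;
@property (nonatomic, strong) NSFileHandle *fileHandle;
@end

@implementation SensorRecorder
- (instancetype)initWithURL:(NSURL *)fileURL {
    if ((self = [super init])) {
        [[NSFileManager defaultManager] createFileAtPath:fileURL.path contents:nil attributes:nil];
        _fileHandle = [NSFileHandle fileHandleForWritingToURL:fileURL error:NULL];
        _writeQueue = dispatch_queue_create("sensor.write", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

// Called from the Bluetooth callback; copies the bytes and returns immediately.
- (void)appendPacket:(NSData *)packet {
    NSData *copy = [packet copy];
    dispatch_async(self.writeQueue, ^{
        [self.fileHandle writeData:copy]; // writes happen in arrival order
    });
}
@end

A fixed-size circular buffer achieves the same decoupling with tighter control over memory; the essential property either way is that the producer (the Bluetooth callback) never blocks on the consumer (file I/O).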

Save gzip compressed data to cache storage from service worker

We are trying to store some API responses in Cache Storage for a PWA. We are intercepting the fetch requests in a service worker and storing the responses in the cache. But our uncompressed API responses are fairly large, and we want to keep the compressed (gzip) version in the cache and decompress it when needed.
Is there any way we can prevent the browser from automatically decompressing the responses from the server?
I'm not aware of any way to do this automatically. Most servers will only compress their response bodies on the fly if the incoming request indicates that the browser supports compression, and in that case, the browser will automatically decompress the body before you have access to the compressed bytes.
You may have better luck either explicitly compressing the files on the server and downloading and storing that compressed version (i.e. fetch('asset.json.gz')), or alternatively, using a CompressionStream (which isn't widely supported outside of Chromium-based browsers) to compress your data client-side prior to storing it.

HTTP/2 Streaming and static compression

I need to implement an HTTP/2 server, both in Node and in C++. However, I can't grasp how to make streaming work with static compression:
I want to compress my files with the highest compression possible, and this is done statically at build time
I want to stream my HTML, so the browser receives the <head> asap, and can either prefetch resources or retrieve them from the local cache
But files that are compressed can't be read before receiving all the data, can they?
Should I give up compression, or should I compress HTML stream chunks separately? Is there a better way?
But files that are compressed can't be read before receiving all the data, can they?
This is (generally) incorrect. Deflate-based compression (e.g. gzip, brotli) as used for HTML files can be decompressed without receiving all the data.
These formats work mostly by back-referencing data. For example, the above sentence has a repeated reference to the text “compress”:
Deflate-based compression (e.g. gzip, brotli) can be decompressed without receiving all the data.
So the second instance could be replaced with a back reference to the first one:
Deflate based compression (e.g. gzip, brotli) can be de(-49,8)ed without receiving all the data.
So you can see that as long as you are reading in order (which HTTP guarantees) and from the beginning, you don't need any subsequent data to decompress what you've already received - but you do need all the previous text.
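Since the question mentions C++ as well as Node, here is a small, hedged illustration of that property using zlib (plain C, callable from C++ or Objective-C; the chunk-supplying block stands in for your network reads): inflate() accepts input piecewise and emits decompressed bytes as soon as it can, which is exactly what lets a browser start parsing gzip-compressed HTML mid-download.

#import <Foundation/Foundation.h>
#include <zlib.h>   // link with -lz

// Feed a gzip stream to inflate() chunk by chunk; emit() receives decompressed
// output as soon as it is available, long before the stream is complete.
static void inflateChunks(NSData *(^nextChunk)(void), void (^emit)(NSData *)) {
    z_stream strm = {0};
    inflateInit2(&strm, 16 + MAX_WBITS);            // 16+MAX_WBITS: expect a gzip header
    unsigned char out[16384];
    int status = Z_OK;
    NSData *chunk;
    while (status == Z_OK && (chunk = nextChunk()) != nil) {
        strm.next_in  = (unsigned char *)chunk.bytes;
        strm.avail_in = (unsigned int)chunk.length;
        do {                                        // drain all output for this input chunk
            strm.next_out  = out;
            strm.avail_out = sizeof(out);
            status = inflate(&strm, Z_NO_FLUSH);
            emit([NSData dataWithBytes:out length:sizeof(out) - strm.avail_out]);
        } while (status == Z_OK && strm.avail_out == 0);
    }
    inflateEnd(&strm);
}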
Similarly, JPEGs are often displayed before they are fully received, either by loading them line by line (non-progressive JPEGs) or by showing a blurry image that is enhanced as more data arrives (progressive JPEGs).

NSURLConnection consuming huge memory

I'm using NSURLConnection to talk to the server side, and I have observed that when the server takes time to respond, the app allocates about 40 MB.
I don't know if I'm the only one having this problem.
Thanks in advance.
Yes, this can happen if your response data is large. Generally we create an instance of NSMutableData and append all downloaded data to it. That works fine when the data is comparatively small. If the response is large, the better way is to create a file in the Documents directory and append each chunk to that file as the connection receives data, then read the file after the connection finishes loading.
This approach to saving data applies on Android as well.
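On iOS, a rough sketch of that append-to-file approach might look like this (self.outputStream is a hypothetical property you would add to your connection delegate):

// NSURLConnectionDataDelegate: stream each chunk straight to disk so only
// one chunk is ever held in memory at a time.
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    if (self.outputStream == nil) {
        NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"download.tmp"];
        self.outputStream = [NSOutputStream outputStreamToFileAtPath:path append:YES];
        [self.outputStream open];
    }
    [self.outputStream write:(const uint8_t *)data.bytes maxLength:data.length];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    [self.outputStream close]; // the file now holds the complete response
}

A production version should also check the return value of write:maxLength:, which can report a partial write.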

Should I save my images in Core Data or should I use SDWebImage

I have been developing an app with a cloud/server data source, so naturally the process is one thing at a time. At present, to fill my tables, I query the server, which returns an array. The array contains URLs to images, and from those URLs I load the images using SDWebImage.
Now I am entering the stage of development where I need to implement Core Data for all the data in my tables (i.e. texts and images). So far I am considering the following approaches:
I can load the array from the server into Core Data (imagine properties such as firstName, lastName, photoUrl, shortBio) and then pass the photo URL from Core Data to SDWebImage to display the image in the table cells. OR
I can load both the array and the images into Core Data (i.e. load the array into Core Data in the background and then, for each row, load the image into Core Data).
Of course, the point here is that if I use SDWebImage, it will save the image in its own caching system, which, from my limited understanding, may or may not be entirely consistent with what is in Core Data. On the other hand, I don't understand Core Data well enough to know whether it handles saving images well in terms of performance (i.e. whether it knows it's an image and therefore handles the file linking).
So what is the best way to do this? Can SDWebImage work in harmony with Core Data? Or is SDWebImage redundant because Core Data is good enough all by itself?
Another thing to note: at present my data loads from the server immediately, and then the images arrive as SDWebImage loads each one into its UIImageView. This may not be a problem with Core Data, since ideally the image would be in the local DB. Any thoughts?
Based on your question and comments, it seems you are trying to locally cache images that were retrieved through an HTTP request.
The URL loading system is already caching the images. There is no need to implement another layer of caching on top of that, whether it be SDWebImage or Core Data. When an HTTP response is received from the server, it includes "freshness" information that tells the client how long, and under what conditions, that response is valid. The URL loading system, by default, obeys those rules. You can inspect the freshness information of responses using a tool like Charles or REDBot. The server is the only party in this conversation that can know how long a response is valid for.
The URL loading system does not, by default, cache to the filesystem - only in memory. This is easy to change:
NSURLCache *cache = [[NSURLCache alloc] initWithMemoryCapacity:(1024 * 1024 * 512) diskCapacity:(1024 * 1024 * 1024 * 100) diskPath:@"Cache.db"];
[NSURLCache setSharedURLCache:cache];
Or when using NSURLSession:
NSURLCache *cache = [[NSURLCache alloc] initWithMemoryCapacity:(1024 * 1024 * 512) diskCapacity:(1024 * 1024 * 1024 * 100) diskPath:@"Cache.db"];
NSURLSessionConfiguration *sessionConfiguration = [NSURLSessionConfiguration defaultSessionConfiguration];
[sessionConfiguration setURLCache:cache];
NSURLSession *session = [NSURLSession sessionWithConfiguration:sessionConfiguration];
Storing image data in Core Data is something to avoid in general, at least with the NSSQLiteStoreType persistent store. Even small images make the SQLite database large and cause fragmentation, etc., which hurts performance. If you are going to store image data in Core Data, it's preferable to use external storage - either by using Core Data's own external record storage, or by storing the image data on the filesystem and referencing it from managed objects by URL.
However, if you are using Core Data OR SDWebImage to "cache" images, you are ignoring the freshness information that was returned in the server's response unless you implement your own validation layer - which the URL loading system is already doing for you!
Regardless of the question semantics, I think you need some more hard information to inform your decision.
It would be useful for you to know that storing large images (larger than, say, thumbnails) in Core Data is not recommended and will lead to performance issues. Core Data has an option for large data: you can check "Store in External Record File". Alternatively, you can manage your own cached data (that way you can flexibly update the images via download on a per-need basis on each device sharing the same data). The accepted best practice is to store only the image URL in Core Data (typically relative to the application directory) and handle the file storage / display separately, as sketched below.
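To make that concrete, here is a hedged sketch of the "path in Core Data, bytes on disk" pattern; the photoPath attribute name is invented for the example:

// Write the image bytes to Application Support and store only a relative
// file name in the managed object; the absolute sandbox path can change
// between app updates, so avoid persisting it.
- (void)savePhoto:(NSData *)imageData forPerson:(NSManagedObject *)person {
    NSFileManager *fm = [NSFileManager defaultManager];
    NSURL *dir = [[fm URLsForDirectory:NSApplicationSupportDirectory inDomains:NSUserDomainMask] firstObject];
    [fm createDirectoryAtURL:dir withIntermediateDirectories:YES attributes:nil error:NULL];
    NSString *fileName = [[NSUUID UUID] UUIDString];
    [imageData writeToURL:[dir URLByAppendingPathComponent:fileName] atomically:YES];
    [person setValue:fileName forKey:@"photoPath"]; // hypothetical attribute
}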
I do not know SDWebImage, but this might provide some of the functionality that you need in the light of the above.
I hope this helps you make your decision about the data architecture.
