How can I obtain the mosaic image for a playlist using CocoaLibSpotify? - cocoalibspotify-2.0

From the header documentation of SPPlaylist for its image property:
Returns the custom image for the playlist, or nil if the playlist hasn't loaded yet or it doesn't have a custom image.
I have an array of loaded SPPlaylists; however, the image property on each object is always nil, even though I can see the 4-up image on those same playlists in the Spotify client.
Is there an easy way to obtain that 4-up cover image using CocoaLibSpotify? Or do I have to load all track and album metadata and pull back relevant SPImages individually?

The image property of a playlist is for when branded playlists have custom images. This is fairly rare, though.
The reason the grid isn't generated for you is that it's generated locally rather than server-side, so it'd mean loading multiple albums' worth of images every time a playlist is loaded, which isn't very memory efficient.
However, there's an open-source Spotify client called Viva built on CocoaLibSpotify (disclosure: written by me) that generates these images. Have a look at the VivaImageExtensions class extension for a reference implementation.
It's worth noting that the reference implementation there requires that the tracks you pass have had their album cover art loaded first.
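If you do decide to generate the grid yourself, a minimal sketch of the compositing step might look like this (assuming you have already loaded the four album cover UIImages, e.g. via each album's SPImage; the function name and output size are arbitrary):

```swift
import UIKit

/// Composites up to four already-loaded album cover images into a 2x2
/// mosaic, similar to the grid the Spotify client shows for playlists.
func mosaicImage(from covers: [UIImage], size: CGFloat = 128) -> UIImage? {
    guard !covers.isEmpty else { return nil }

    let renderer = UIGraphicsImageRenderer(size: CGSize(width: size, height: size))
    let tile = size / 2

    return renderer.image { _ in
        for (index, cover) in covers.prefix(4).enumerated() {
            // Lay the covers out left-to-right, top-to-bottom.
            let origin = CGPoint(x: CGFloat(index % 2) * tile,
                                 y: CGFloat(index / 2) * tile)
            cover.draw(in: CGRect(origin: origin, size: CGSize(width: tile, height: tile)))
        }
    }
}
```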

Related

Difference between three Firebase Storage download methods

I couldn't find resources discussing the difference between the three download methods in the Firebase Storage documentation, or the pros/cons of each, so I would like some clarification.
My App
Displays 100 images ranging from 10 KB-500 KB in size on a table view
Will be used in a location where internet connection and/or phone service could be very weak
Could be used by many users
3 methods for downloading from Firebase storage
Download to NSData in memory
This is the easiest way to quickly download a file, but it must load entire contents of your file into memory. If you request a file larger than your app's available memory, your app will crash. To protect against memory issues, make sure to set the max size to something you know your app can handle, or use another download method.
Question: I tried this method to display 100 images of 10 KB-500 KB in my table view cells. Although my app didn't crash, my memory usage climbed to 268 MB as I scrolled through the table. Would this method not be recommended for displaying a lot of images?
Download to an NSURL representing a file on device
The writeToFile:completion: method downloads a file directly to a local device. Use this if your users want to have access to the file while offline or to share in a different app.
Question: Does that mean all images from Firebase Storage will be downloaded to the user's phone? Does that mean the app will take up a large percentage of the available storage on the phone?
Generate an NSURL representing the file online
If you already have download infrastructure based around URLs, or just want a URL to share, you can get the download URL for a file by calling the downloadURLWithCompletion: method on a storage reference.
Question: Does this method require a strong internet connection and/or phone service connection to work?
Generally, your memory usage should not be affected by the method of retrieval. As long as you're displaying the 100 images, their data will be held in memory and should take up the same amount of space if they're identically formatted/compressed.
Whichever way you go, I suggest you implement pagination (for your convenience, this question's answer might serve as a good implementation reference/guide) to possibly decrease memory and network usage.
Now, down to comparing the methods:
Method 1
...but it must load entire contents of your file into memory.
This line might throw some people off into thinking it's a memory-inefficient solution, when all it really means is that you cannot retrieve parts of the data; you can only download the entire file. In the case of storing images, you would probably want that anyway for the data to make sense.
If your application needs to download the images every time the users access it (i.e. if your images are regularly updated), then this method will probably suit you best. The images will be downloaded every time the application starts, then discarded when you kill it.
You stated that a part of your user base might have a weak internet connection, so the next method might be more efficient and user-friendly.
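For reference, a minimal Swift sketch of this method (the storage path is hypothetical; depending on your SDK version the call may be spelled getData(maxSize:completion:) or data(withMaxSize:completion:)):

```swift
import FirebaseStorage
import UIKit

/// Downloads a single image into memory, capped at 1 MB.
func loadImage(at path: String, completion: @escaping (UIImage?) -> Void) {
    let imageRef = Storage.storage().reference(withPath: path)

    // The maxSize cap protects against an unexpectedly large file
    // exhausting the app's memory, as the docs warn.
    imageRef.getData(maxSize: 1 * 1024 * 1024) { data, error in
        guard error == nil, let data = data else {
            completion(nil)
            return
        }
        completion(UIImage(data: data))
    }
}
```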
Method 2
First off, the answers to your questions:
Yes. The images downloaded using this method will be stored on the users' devices.
The images should take up about the same amount of space as they do in Firebase storage.
Secondly, if you plan to use this method, then I suggest you store a timestamp (or any sort of marker) in your database for when the last change to the images occurred. Then, every time the app opens up, do the following flow:
If no images are downloaded -> download images and store the database timestamp locally
If the local timestamp does not equal the timestamp on the database -> download images and store the new timestamp locally
Else -> use the images you already have, they should be identical to the ones in Firebase storage
That would be the best way to go if your network usage priority is higher than that of the local storage.
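A rough sketch of that flow in Swift, assuming the remote timestamp comes from your database and using hypothetical key and path names:

```swift
import FirebaseStorage
import Foundation

/// Re-downloads an image only when the server-side timestamp has changed.
func syncImage(named name: String, remoteTimestamp: TimeInterval) {
    let defaults = UserDefaults.standard
    let localURL = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent(name)

    let localTimestamp = defaults.double(forKey: "timestamp-\(name)")
    let alreadyDownloaded = FileManager.default.fileExists(atPath: localURL.path)

    // Steps 1-3 of the flow above: only download when missing or stale.
    guard !alreadyDownloaded || localTimestamp != remoteTimestamp else { return }

    let imageRef = Storage.storage().reference(withPath: "images/\(name)")
    imageRef.write(toFile: localURL) { _, error in
        if error == nil {
            defaults.set(remoteTimestamp, forKey: "timestamp-\(name)")
        }
    }
}
```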
And finally...
Method 3 (not really)
This is not a data download method; it simply generates a download URL given a reference to the child. You can then use that URL to download the data in your app or elsewhere, as long as the app or API using it is authorized to access your Firebase storage.
Update:
The URL is generated from a Firebase Storage reference (e.g. FIRStorage.storage().reference().child("exampleReference")) and would look like this (note: this is a fake link that will not actually work; it's just used for illustration purposes):
https://firebasestorage.googleapis.com/v0/b/projectName.appspot.com/o/somePathHere%2FchildName%2FsomeOtherChildName%2FimageName.jpg?alt=media&token=1a8f83a7-95xf-4d3s-nf9b-99a274927bcb
If you simply access the link you generate through any regular web browser (assuming you don't have any Firebase rule in your project that conflicts with that), you can directly download that image from anywhere, not just through your app.
So in conclusion, this "method" does not download data from Firebase storage; it just returns a download URL for your data in case you want a direct link.
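A minimal sketch of generating such a URL in Swift (the path is hypothetical; older SDKs expose the same call as downloadURLWithCompletion: on a FIRStorageReference):

```swift
import FirebaseStorage

// Generates a shareable HTTPS URL for a file in Storage; this call does
// not download the file's bytes itself.
Storage.storage().reference(withPath: "images/photo.jpg").downloadURL { url, error in
    if let url = url {
        // Hand the URL to URLSession, an image-loading library, or share it.
        print("Download URL: \(url)")
    }
}
```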

How to cache images in Meteor?

I'm building a mobile app using Meteor. To allow for offline usage of the app, I want the app to be able to download a large-ish json file while online, then access the data in the json file, written to MongoDB, while offline.
This works fine. However, in the downloaded json file, there are plenty of references to online images that won't display in the app once the app is offline.
So, I want to be able to download (a selection of) the images referenced in the json file to the app, so that the app can access them even when offline.
(Downloading images could happen in the background for as long as a connection is available.)
There's an implementation of imgCache.js available on Atmosphere, which fails to initialize for me.
I suppose it's theoretically possible to individually load each image to a canvas, save the canvas content to MongoDB, then load the content when needed. Info on some of this is here. But, this feels rather convoluted and, if really feasible, I would expect someone to have done this before with success.
How can I achieve caching of images for offline use in Meteor?
So, you've probably already read this article about application cache.
If the images are static, you can just include them in the manifest. Be sure you understand the manifest and cache expirations (see the article).
If the images are dynamic, you'll find some techniques to store images in local storage. If that's the case, this may be what you want.

Delete unused persistent data without reference

I have an app which communicates with a server. In this app I have a tableview in which I display several people from my company (their first and last name and their profile image).
Every time the tableview opens or needs to refresh, I fetch the user list from my server. These users all have an image_name, which I try to look up in an array in the app itself. If I can't find it there, I load it from the documents dir; if I can't find it there either, I download it from my server and save it locally on the device to prevent future downloads.
This works very well and it's a very easy way to manage the users and their images, it also makes sure that I download an image only once if several users have the same image (e.g. the company logo when they haven't uploaded an image yet).
The problem is that I don't keep a reference to these users so the app has no clue which user uses which image OR even if an image is still in use.
So when user A has image X, it will be downloaded to the iPhone. If user A then changes his image to Y, the app will download and display image Y correctly. However, image X will never get deleted from the persistent data.
I ask you, the stackoverflow community, what's the best way to handle this?
Should I start keeping a reference to my users so I can also keep a reference to the old image?
Is there any way to find the timestamp of the last time an image was read from the documents dir?
Should I store the image names in Core Data along with all the references to them? (some kind of custom reference-counting logic)
...
At some point in time you have the list of used images; at that same point you also have a list of images saved to disk. Once per day you can take this information and, on a background thread, compare the used images against the saved ones and delete any that are no longer used. This shouldn't require any additional data storage.
If you want to allow images to hang around for a while after they stop being used, you can 'touch' the file (update its fileModificationDate) each time you use it, and then later check the modification dates of all images and delete on that basis.
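A rough sketch combining both ideas, assuming the set of image names currently in use comes from your latest user fetch and that the cached files are PNGs in the Documents directory:

```swift
import Foundation

/// Deletes cached images that are no longer referenced by any user and
/// "touches" the ones still in use so their modification dates record
/// the last time they were needed.
func pruneImages(keeping usedImageNames: Set<String>) {
    let fileManager = FileManager.default
    let documents = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0]

    DispatchQueue.global(qos: .background).async {
        guard let files = try? fileManager.contentsOfDirectory(
            at: documents, includingPropertiesForKeys: nil) else { return }

        for file in files where file.pathExtension == "png" {
            if usedImageNames.contains(file.lastPathComponent) {
                // Still in use: bump its modification date.
                try? fileManager.setAttributes([.modificationDate: Date()],
                                               ofItemAtPath: file.path)
            } else {
                try? fileManager.removeItem(at: file)
            }
        }
    }
}
```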
You could add a prefix to the image that you download and when you fetch images, check all images in persistent storage for this prefix and then remove if there are any. You should only need to delete (maximum) one image every time your client fetches, which wouldn't be too heavy on the client.

Saving user photos and photos from web

So I have this situation with images. In one of the app's stages I get all the user's photos from his photo library as ALAssets and let him choose the photo he wants. Then I save the chosen photo to the application's documents directory: the full-size photo with a HIGH_ prefix and a thumbnail with a LOW_ prefix. I need this because the photos have some properties, like time etc., which I save to a SQL database together with the photo name beginning with HIGH_ or LOW_. When I need to get photos, I get the properties from the db and then do [UIImage imageWithContentsOfFile:photoPath]. Can someone tell me how to do this more efficiently? Writing and reading photos like this takes some time, and on iPhone 4 I sometimes even get memory warnings. AND another question would be: how should I save photos fetched from the web?
I stand corrected; instead of using Core Data, Apple writes:
It is better, however, if you are able to store BLOBs as resources on the filesystem, and to maintain links (such as URLs or paths) to those resources. You can then load a BLOB as and when necessary.
So you are doing it correctly, but maybe you should check out transformables. Just make sure you remove images you aren't using from memory if you are getting warnings.
From documentation: https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/CoreData/Articles/cdPerformance.html
Under the section "Large Data Objects (BLOBs)".
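As a rough sketch of that filesystem approach (the cache and function names here are made up; the idea is to keep only the HIGH_/LOW_ file name in the database, read the file off the main thread, and let NSCache drop decoded images under memory pressure):

```swift
import UIKit

// Small in-memory cache so photos that are shown repeatedly are not
// re-read from disk every time; NSCache evicts under memory pressure.
let imageCache = NSCache<NSString, UIImage>()

/// Loads a photo by the name stored in the database (e.g. "LOW_123.jpg"),
/// reading from disk on a background queue only when it isn't cached yet.
func loadPhoto(named name: String, completion: @escaping (UIImage?) -> Void) {
    if let cached = imageCache.object(forKey: name as NSString) {
        completion(cached)
        return
    }
    DispatchQueue.global(qos: .userInitiated).async {
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent(name)
        let image = UIImage(contentsOfFile: url.path)
        if let image = image {
            imageCache.setObject(image, forKey: name as NSString)
        }
        DispatchQueue.main.async { completion(image) }
    }
}
```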
Another way to do it with a transformable:
Which way to store data(image)? NSData, String or Transformable
In fact, transformables are perhaps the way to go, for Core Data at least:
How should I store UIImages within my Core Data database?

Play socket-streamed h.264 movie on iOS using AVFoundation

I'm working on a small iPhone app which streams movie content over a network connection using regular sockets. The video is in H.264 format. I'm however having difficulties playing/decoding the data. I've been considering using FFmpeg, but the license makes it unsuitable for the project. I've been looking into Apple's AVFoundation framework (AVPlayer in particular), which seems to be able to handle H.264 content; however, I'm only able to find methods to initiate the movie using a URL, not by providing a memory buffer streamed from the network.
I’ve been doing some tests to make this happen anyway, using the following approaches:
Play the movie using a regular AVPlayer. Every time data is received on the network, it's written to a file using fopen with append mode. The AVPlayer's asset is then reloaded/recreated with the updated data. There seem to be two issues with this approach: firstly, the screen goes black for a short moment while the first asset is unloaded and the new one loaded. Secondly, I do not know exactly where playback stopped, so I'm unsure how I would find the right place to start playing the new asset from.
The second approach is to write the data to the file as in the first approach, but with the difference that the data is loaded into a second asset. An AVQueuePlayer is then used, with the second asset inserted/queued in the player and started once buffering has finished. The first asset can then be unloaded without a black screen. However, using this approach it's even more troublesome (than the first approach) to find out where to start playing the new asset.
Has anyone done something like this and made it work? Is there a proper way of doing this using AVFoundation?
The official method to do this is the HTTP Live Streaming format, which supports multiple quality levels (among other things) and automatically switches between them (e.g. if the user moves from Wi-Fi to cellular).
You can find the docs here: Apple Http Streaming Docs
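Once your server exposes the stream as an HLS (.m3u8) playlist, playback on the client side is straightforward; a minimal sketch, with a placeholder URL:

```swift
import AVFoundation
import UIKit

final class StreamViewController: UIViewController {
    // Placeholder URL for wherever your server publishes the HLS playlist.
    private let streamURL = URL(string: "https://example.com/stream/index.m3u8")!
    private var player: AVPlayer?

    override func viewDidLoad() {
        super.viewDidLoad()

        // AVPlayer handles HLS natively, including switching between
        // quality levels as network conditions change.
        let player = AVPlayer(url: streamURL)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)

        player.play()
        self.player = player
    }
}
```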
