I'm using a PHPickerViewController to allow the user to select a media item from their library. And the specific case of interest is when they select a video.
I get a callback in the PHPickerViewControllerDelegate method:
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult])
And I'm using loadFileRepresentation to get the video from the NSItemProvider.
However, that call takes a substantial amount of time. And it may be all for nothing -- since I'm imposing a size limit on the videos that users can add to the app.
So, my question is: Is there a way to determine, quickly, the size of the media item that the user has selected? E.g., given the NSItemProvider?
You're looking in the wrong place. What you're starting with, even before you have an item provider, is a PHPickerResult. It gives you the asset identifier. From there you can obtain from the photo library the PHAsset and find out anything the photo library is willing to tell you. Size is an amorphous concept, but you can certainly get the video pixel dimensions and its duration instantly, which will tell you a lot.
Of course you'll need photo library read permission to go that route, which you don't need merely to save the video to a file.
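For example, a minimal sketch of that approach (it assumes the picker was created with PHPickerConfiguration(photoLibrary: .shared()), otherwise assetIdentifier will be nil, and that the app has photo library read access):

```swift
import Photos
import PhotosUI

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)

    // assetIdentifier is only populated when the picker was configured with
    // PHPickerConfiguration(photoLibrary: .shared()).
    guard let identifier = results.first?.assetIdentifier else { return }

    // Requires photo library read permission.
    let assets = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil)
    guard let asset = assets.firstObject else { return }

    // Both values are available immediately, without loading the file.
    print("Pixel size: \(asset.pixelWidth) x \(asset.pixelHeight)")
    print("Duration: \(asset.duration) seconds")
}
```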
On the other hand, since videos are huge, one might ask why you care about the size. The key thing to do with a video is not to obtain its entirety as a lump of data but to play it, and the photo library will let you do that instantly.
I would also try loadInPlaceFileRepresentation if possible; it will be much faster.
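As a rough sketch of that route (the size check via FileManager is my own addition, not something NSItemProvider reports directly):

```swift
import PhotosUI
import UniformTypeIdentifiers

// Called with one of the results from didFinishPicking.
func checkSize(of result: PHPickerResult) {
    result.itemProvider.loadInPlaceFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, openedInPlace, _ in
        guard let url = url else { return }
        // When openedInPlace is true no copy was made; otherwise this is a
        // temporary copy that is only valid inside this handler.
        if let attributes = try? FileManager.default.attributesOfItem(atPath: url.path),
           let bytes = attributes[.size] as? Int64 {
            print("Selected video is \(bytes) bytes (opened in place: \(openedInPlace))")
        }
    }
}
```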
I am currently trying to grab all user information using the Microsoft Graph API, and I was wondering if there was a more concise way of also grabbing a user's photos to replace what I am currently doing.
I am currently doing the following:
Grab user details by calling https://graph.microsoft.com/beta/users?$expand=manager
NOTE: I am using the beta API as I cannot seem to get the manager data back otherwise.
Iterate over users and then for each user call https://graph.microsoft.com/beta/users/{user_id}/photos
For each photo returned I am then;
Calling https://graph.microsoft.com/beta/users/{user_id}/photos/{photo_id} to get the metadata (e.g. id, height and width). NOTE: I have found the height and width on the initial photos call are not always correct, whereas they are on this call (so I use this call only for the details).
Calling https://graph.microsoft.com/beta/users/{user_id}/photos/{photo_id}/$value to get the image.
This results in lots of individual API calls on top of the users query:
for every user, a call to get the array of photos
for every photo, a call to get the photo metadata
for every photo, a call to get the photo binary data.
This could quickly get substantial with a large organisation utilizing multiple profile pictures per person for different sizes.
Is there a way of expanding the photo data within the initial users call to avoid having to do all of these additional API calls?
Thanks,
John
A couple of things here, John.
Calling GET https://graph.microsoft.com/v1.0/me/photos is basically a way for you to see which photo sizes are available. You can pick the size that suits your experience: for example, on a mobile device you might select the smallest (or second-smallest) size, while on a desktop you might select the largest. Requesting GET https://graph.microsoft.com/v1.0/me/photo (note the singular) gets the metadata for the largest available photo. The smaller sizes are scaled versions of the largest photo, and in some circumstances only one size is available. Please see the details in the profile photo documentation. So I'm not sure why you need to get every profile photo size (it's the same photo, just at different sizes).
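For illustration only (the user id, size string, and token below are placeholders), fetching a single specific size directly looks roughly like this:

```swift
import Foundation

// Hypothetical helper: fetch one photo size for one user and hand back the raw bytes.
func fetchPhoto(userId: String, size: String, accessToken: String,
                completion: @escaping (Data?) -> Void) {
    let url = URL(string: "https://graph.microsoft.com/v1.0/users/\(userId)/photos/\(size)/$value")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data)  // raw image bytes on success
    }.resume()
}

// e.g. fetchPhoto(userId: "user@contoso.com", size: "96x96", accessToken: token) { data in ... }
```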
You should take a look at batching to make multiple API calls in one round-trip. Please see this batching topic. NOTE: Batching is still in preview, but will GA soon.
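A rough sketch of what a batched photo request could look like (the ids and token are placeholders; a batch currently takes up to 20 sub-requests, and binary bodies come back base64-encoded inside the batch JSON):

```swift
import Foundation

func fetchPhotosInBatch(userIds: [String], accessToken: String,
                        completion: @escaping (Data?) -> Void) {
    var request = URLRequest(url: URL(string: "https://graph.microsoft.com/v1.0/$batch")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // Each sub-request gets an id so the (possibly out-of-order) responses
    // can be matched back to users.
    let subRequests = userIds.prefix(20).enumerated().map { index, id in
        ["id": "\(index)", "method": "GET", "url": "/users/\(id)/photo/$value"]
    }
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["requests": subRequests])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data)  // JSON containing one response object per sub-request id
    }.resume()
}
```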
Hope this helps,
How can I store data such as time or latitude for each second of video frames while capturing the video, and retrieve that data back from the saved video on iOS?
Most video formats include metadata describing the whole video at the asset level. When using AVAssetWriter, you can set metadata items before writing to associate them with the output file. https://developer.apple.com/reference/avfoundation/avassetwriter#//apple_ref/occ/instp/AVAssetWriter/metadata
There are common keys (https://developer.apple.com/reference/avfoundation/1668870-av_foundation_metadata_key_const/1669056-common_metadata_keys) you can use to store the information if you like.
Note this is only at the file level, not per frame.
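A minimal sketch of setting those asset-level items on an AVAssetWriter (the output path and the location string are placeholders):

```swift
import AVFoundation

func makeWriter() throws -> AVAssetWriter {
    let outputURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("output.mov")
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    let creationDate = AVMutableMetadataItem()
    creationDate.identifier = .commonIdentifierCreationDate
    creationDate.value = Date() as NSDate

    let location = AVMutableMetadataItem()
    location.identifier = .commonIdentifierLocation
    location.value = "+37.3318-122.0312/" as NSString  // ISO 6709 location string

    // Applies to the whole output file, not to individual frames.
    writer.metadata = [creationDate, location]
    return writer  // add inputs, then startWriting() as usual
}
```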
If you want to store information at a "frames per second" type time reference, then you could build a custom solution, interacting with the buffers "vended" by AVFoundation (see Recording locations in time domain with AVFoundation). It's possible to then write your own custom storage for that information, synchronised to the video file, and you would need to read it back and process it yourself.
I don't believe there's a way to encapsulate that "per frame location info" within the actual video file itself (you could perhaps do a hack and repurpose a subtitles AVAssetTrack and write the info, then pull it off but not display it - this would be unpredictable when video was played on other devices however).
ADDITIONAL INFO
Following on from a comment a year after I wrote this, I did some more investigation. While you could use and abuse the subtitle track as suggested, a better solution is to use the AVAsset metadata track type, which is specifically for this. https://developer.apple.com/documentation/avfoundation/avmediatype/1390709-subtitle
There are many different AVAssetTrack types which allow you to attach timed data to a point in a video, including:
audio
closedCaption
depthData (beta at the time of this edit)
metadata <- This is probably what you want (see the sketch after this list)
metadataObject <- In combination with this one too
muxed
text
timecode
video
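As a sketch of the timed-metadata route (not a drop-in solution: I'm using the QuickTime ISO 6709 location identifier as an example specification, and the writer/input plumbing around it is reduced to the essentials):

```swift
import AVFoundation
import CoreMedia

// Create a metadata input/adaptor for a writer that already has its video input.
func makeLocationMetadataAdaptor(for writer: AVAssetWriter) -> AVAssetWriterInputMetadataAdaptor {
    // Describe what the metadata track will carry.
    let spec: [String: Any] = [
        kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as String:
            AVMetadataIdentifier.quickTimeMetadataLocationISO6709.rawValue,
        kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as String:
            kCMMetadataDataType_QuickTimeMetadataLocation_ISO6709 as String
    ]
    var description: CMFormatDescription?
    CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
        allocator: kCFAllocatorDefault,
        metadataType: kCMMetadataFormatType_Boxed,
        metadataSpecifications: [spec] as CFArray,
        formatDescriptionOut: &description)

    let input = AVAssetWriterInput(mediaType: .metadata,
                                   outputSettings: nil,
                                   sourceFormatHint: description)
    input.expectsMediaDataInRealTime = true
    writer.add(input)
    return AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
}

// Append one location sample covering one second of the timeline.
func appendLocation(_ iso6709: String, at time: CMTime,
                    to adaptor: AVAssetWriterInputMetadataAdaptor) {
    let item = AVMutableMetadataItem()
    item.identifier = .quickTimeMetadataLocationISO6709
    item.dataType = kCMMetadataDataType_QuickTimeMetadataLocation_ISO6709 as String
    item.value = iso6709 as NSString

    let range = CMTimeRange(start: time, duration: CMTime(value: 1, timescale: 1))
    let group = AVTimedMetadataGroup(items: [item], timeRange: range)
    if adaptor.assetWriterInput.isReadyForMoreMediaData, !adaptor.append(group) {
        print("Failed to append timed metadata group")
    }
}
```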
I want to determine the data traffic used while the user is watching a Dailymotion video. Does anyone have an idea? Thanks in advance.
This info is not available in the API. Note that the data used will depend on the video quality (1080p, 720p, 480p, etc.), and that may vary from one user to another, depending on their device, bandwidth, etc. The quality is chosen automatically by the video player, but the user can also decide to specify a quality.
You will see all available fields for a video object in the documentation at: https://developer.dailymotion.com/api#video-fields
Just call getimagesize() on the thumbnail_url!
I'm using the UIImagePickerController to select a video from the device's camera roll. However, I'm not interested in viewing the video at this time; I want to save the URL (in Core Data) so that when the user chooses the name of the video from, for example, a pickerView, the video will load and play at that time.
My understanding (which may be wrong) is that the UIImagePickerController makes a compressed copy into the sandbox and provides two different URLs (in the info dictionary). It is kind of a guess at this point, but what I think is:
UIImagePickerControllerMediaURL is the url that points to the original video; and
UIImagePickerControllerReferenceURL is the url that points to the copy.
Here are my questions:
a) Is my assumption correct as to what the two URLs point to, and can I count on the ReferenceURL to point to the selected video so long as it is on the device's camera roll?
and
b) Under the circumstances, is there any way to avoid the compression? From reading on SO, I'm thinking there may not be, but I haven't really seen any posts that really relate exactly to what I'm doing. The structure of my app is such that there could be a lot of these videos and users will not want to get rid of the original, so there is no point in having both the original and compressed version around.
All I'm interested in is a URL I can use to access the video in the camera roll. I will also have to get a thumbnail of it to store with the URL, but I think I see how to do that.
Any help on this will be greatly appreciated.
If you only want a URL to access the video, then you can use UIImagePickerControllerMediaURL; this specifies the filesystem URL for the movie (if editing is enabled, it points to the edited/trimmed video). If you want the original video URL, you can use UIImagePickerControllerReferenceURL; this is the Assets Library URL for the original version of the video (the truly selected item, without editing). You can, of course, set controller.allowsEditing = NO to prevent the user from editing the video, in which case UIImagePickerControllerMediaURL contains the URL of the original, unedited video.
AFAIK there is no compression applied to the recorded/selected video by default; this only happens if you press the Share button and try to send the file over MMS, MobileMe, etc. Just make sure you set controller.videoQuality = UIImagePickerControllerQualityTypeHigh to get the highest quality.
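In modern Swift, that approach looks roughly like this (note that UIImagePickerControllerReferenceURL / .referenceURL has since been deprecated in favour of the Photos framework, so treat this as a sketch of the idea):

```swift
import UIKit
import MobileCoreServices

class VideoPicker: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func present(from viewController: UIViewController) {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.mediaTypes = [kUTTypeMovie as String]
        picker.allowsEditing = false          // skip the trim UI entirely
        picker.videoQuality = .typeHigh       // highest quality if any re-encoding happens
        picker.delegate = self
        viewController.present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let fileURL = info[.mediaURL] as? URL       // file-system URL (temporary copy)
        let assetURL = info[.referenceURL] as? URL  // assets-library URL of the original
        print("media URL: \(String(describing: fileURL)), reference URL: \(String(describing: assetURL))")
        picker.dismiss(animated: true)
    }
}
```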
From the header documentation of SPPlaylist for its image property:
Returns the custom image for the playlist, or nil if the playlist
hasn't loaded yet or it doesn't have a custom image
I have an array of loaded SPPlaylists; however, the image property on each object is always nil, even though I can see the 4-up image on those same playlists via the Spotify client.
Is there an easy way to obtain that 4-up cover image using CocoaLibSpotify? Or do I have to load all track and album metadata and pull back relevant SPImages individually?
The image of a playlist is for when branded playlists have custom images. This is fairly rare, though.
The reason the grid isn't generated for you is that it's generated locally rather than server-side, so it'd mean loading multiple albums' worth of images every time a playlist is loaded, which isn't that memory efficient.
However, there's an open-source Spotify client called Viva built on CocoaLibSpotify (disclosure: written by me) that generates these images. Have a look at the VivaImageExtensions class extension for a reference implementation.
It's worth noting that the reference implementation there requires that the tracks you pass have had their album cover art loaded first.
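For illustration only (this is not the VivaImageExtensions code, just a sketch of the same idea using modern UIKit), compositing up to four already-loaded cover images into a 2x2 mosaic could look like this:

```swift
import UIKit

// covers: album cover images that have already been loaded, e.g. from each
// track's album SPImage. Returns a square mosaic, or nil if there is nothing to draw.
func mosaicImage(from covers: [UIImage], side: CGFloat = 300) -> UIImage? {
    guard !covers.isEmpty else { return nil }
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: side, height: side))
    return renderer.image { _ in
        let cell = side / 2
        for (index, cover) in covers.prefix(4).enumerated() {
            let origin = CGPoint(x: CGFloat(index % 2) * cell,
                                 y: CGFloat(index / 2) * cell)
            cover.draw(in: CGRect(origin: origin, size: CGSize(width: cell, height: cell)))
        }
    }
}
```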