My question is simple...
I have an array of bytes, or rather a Data object, containing my video, and I want to play it.
Every library I've tried saves the Data and then uses the saved file's path as a URL.
But I don't want to save the Data...
How can I play a video from bytes or Data without saving it to a file?
but I don't want to save the Data
Yes, you do. You never want to play a video from data held in memory; it's way too big and will crash your app. And in fact the system isn't even set up to play video out of memory! It's set up to play video out of files.
So, as you obtain the data you save it, and in order to play it, you supply the file URL where you saved it. That's the right way.
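If you want something concrete, here is a minimal Swift sketch of that pattern (not the answer's code), assuming the bytes are a complete MP4/QuickTime movie and that you present from a view controller:

import UIKit
import AVKit
import AVFoundation

// Minimal sketch: persist the in-memory Data to a temporary file, then play the file URL.
// The .mp4 extension is an assumption about what the bytes actually contain.
func play(videoData: Data, from presenter: UIViewController) {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("mp4")
    do {
        try videoData.write(to: tempURL)
    } catch {
        print("Could not write temporary video file: \(error)")
        return
    }
    let playerController = AVPlayerViewController()
    playerController.player = AVPlayer(url: tempURL)
    presenter.present(playerController, animated: true) {
        playerController.player?.play()
    }
}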
Related
How can I store data like time or latitude per frame while capturing video, and retrieve that data back from the saved video in iOS?
Most video formats include metadata describing the whole video at the asset level. When using AVAssetWriter, you can add metadata items before writing to associate them with the output file. https://developer.apple.com/reference/avfoundation/avassetwriter#//apple_ref/occ/instp/AVAssetWriter/metadata
There are common keys (https://developer.apple.com/reference/avfoundation/1668870-av_foundation_metadata_key_const/1669056-common_metadata_keys) you can use to store the information if you like.
Note this is only at the file level, not per frame.
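As a minimal Swift sketch of that asset-level approach (assuming you already have an output URL for the movie; the title and ISO 6709 location string are just example values):

import AVFoundation

// Sketch: asset-level metadata must be assigned before startWriting() is called.
func makeWriter(outputURL: URL) throws -> AVAssetWriter {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    let title = AVMutableMetadataItem()
    title.identifier = .commonIdentifierTitle
    title.value = "My capture" as NSString

    let location = AVMutableMetadataItem()
    location.identifier = .commonIdentifierLocation
    location.value = "+37.3318-122.0312/" as NSString

    writer.metadata = [title, location]
    return writer
}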
If you want to store information with a per-frame ("frames per second") time reference, then you could build a custom solution that interacts with the sample buffers "vended" by AVFoundation (see Recording locations in time domain with AVFoundation). You could then write your own custom storage for that information, synchronised to the video file, and read it back and process it yourself.
I don't believe there's a way to encapsulate that per-frame location info within the actual video file itself (you could perhaps hack it by repurposing a subtitle AVAssetTrack, writing the info there and pulling it off without displaying it, but this would be unpredictable when the video was played on other devices).
ADDITIONAL INFO
Following on from a comment a year after I wrote this, I did some more investigation. While you could use and abuse the subtitle track as suggested, a better solution is to use the AVAsset metadata media type, which exists specifically for this. https://developer.apple.com/documentation/avfoundation/avmediatype/1390709-subtitle
There are many different AVAssetTrack media types that let you time data to a point in a video, including:
audio
closedCaption
depthData (beta at the time of this edit)
metadata <- This is probably what you want (see the sketch after this list)
metadataObject <- In combination with this one too
muxed
text
timecode
video
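For the timed (per-frame) case, here is a hedged Swift sketch using AVAssetWriterInputMetadataAdaptor; the "mdta/com.example.location" identifier is a made-up custom key, and the surrounding capture/writer setup is omitted:

import AVFoundation
import CoreMedia

// Sketch: create a timed-metadata input and adaptor on an existing AVAssetWriter.
func makeMetadataAdaptor(for writer: AVAssetWriter) -> AVAssetWriterInputMetadataAdaptor? {
    let spec: [String: Any] = [
        kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as String:
            "mdta/com.example.location",                       // hypothetical custom key
        kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as String:
            kCMMetadataBaseDataType_UTF8 as String
    ]
    var desc: CMFormatDescription?
    let status = CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
        allocator: kCFAllocatorDefault,
        metadataType: kCMMetadataFormatType_Boxed,
        metadataSpecifications: [spec] as CFArray,
        formatDescriptionOut: &desc)
    guard status == 0, let formatDescription = desc else { return nil }   // 0 == success

    let input = AVAssetWriterInput(mediaType: .metadata,
                                   outputSettings: nil,
                                   sourceFormatHint: formatDescription)
    input.expectsMediaDataInRealTime = true
    writer.add(input)                                          // alongside your video input
    return AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
}

// Sketch: append one annotation at a given presentation time. An invalid duration
// means the value applies until the next group is appended.
func appendAnnotation(_ text: String, at time: CMTime,
                      using adaptor: AVAssetWriterInputMetadataAdaptor) {
    let item = AVMutableMetadataItem()
    item.identifier = AVMetadataIdentifier(rawValue: "mdta/com.example.location")
    item.dataType = kCMMetadataBaseDataType_UTF8 as String
    item.value = text as NSString

    let group = AVTimedMetadataGroup(items: [item],
                                     timeRange: CMTimeRange(start: time, duration: .invalid))
    if !adaptor.append(group) {
        print("Could not append timed metadata group")
    }
}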
I'm working on a mobile application that can perform basic analysis on audio input from the microphone in real time. However, the usual way to do it, using AVAudioRecorder as shown in this guide and the API, requires you to save the audio to a file first.
Since the app is meant to stay on for a long time and be used multiple times a day, I want to avoid cluttering the phone with many audio files, or with audio files that are too big. However, I can't seem to find a way around it. Searching for solutions on the internet always leads to how to save audio to a file, rather than avoiding the file and working with some kind of buffer.
Any pointers would be super helpful!
Both the iOS Audio Unit and the Audio Queue APIs allow one to process short buffers of audio input in real-time without saving to a file.
You can also use a tap on the AVAudioEngine. See Apple's documentation: https://developer.apple.com/library/ios/samplecode/AVAEMixerSample/Introduction/Intro.html
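A minimal Swift sketch of that tap approach (assuming microphone permission is already granted; the RMS computation is just an example analysis, and nothing is written to disk):

import AVFoundation

// Sketch: analyse microphone input in real time, buffer by buffer, in memory.
final class MicrophoneAnalyser {
    private let engine = AVAudioEngine()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Raw samples for channel 0; as an example, compute the RMS level.
            guard let samples = buffer.floatChannelData?[0] else { return }
            let count = Int(buffer.frameLength)
            var sum: Float = 0
            for i in 0..<count { sum += samples[i] * samples[i] }
            let rms = (sum / Float(max(count, 1))).squareRoot()
            print("RMS level: \(rms)")
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}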
You can use /dev/null as the path in the AVAudioRecorder instance. That way it will not save to a file, but will just discard the data.
var url = NSUrl.FromString("/dev/null");                       // the recorder writes here, so nothing is kept on disk
var recorder = new AVAudioRecorder(url, settings, out error);
I'm using the UIImagePickerController to select a video from the device's camera roll. However, I'm not interested in viewing the video at this time; I want to save the URL (in Core Data) so that when the user chooses the name of the video from, for example, a pickerView, the video will load and play at that time.
My understanding (which may be wrong) is that UIImagePickerController makes a compressed copy in the sandbox and provides two different URLs (in the info dictionary). It is something of a guess at this point, but what I think is:
UIImagePickerControllerMediaURL is the url that points to the original video; and
UIImagePickerControllerReferenceURL is the url that points to the copy.
Here are my questions:
a) Is my assumption correct as to what the two URLs point to, and can I count on the ReferenceURL to point to the selected video so long as it is on the device's camera roll?
and
b) Under the circumstances, is there any way to avoid the compression? From reading on SO, I'm thinking there may not be, but I haven't seen any posts that relate exactly to what I'm doing. The structure of my app is such that there could be a lot of these videos, and users will not want to get rid of the originals, so there is no point in keeping both the original and a compressed copy around.
All I'm interested in is a URL I can use to access the video in the camera roll. I will also have to get a thumbnail of it to store with the URL, but I think I see how to do that.
Any help on this will be greatly appreciated.
If you only want a URL to access the video, you can use UIImagePickerControllerMediaURL; this is the filesystem URL for the movie (if editing is enabled, it points to the edited/trimmed video). If you want the original video's URL, you can use UIImagePickerControllerReferenceURL, which is the Assets Library URL for the original version of the video (the item actually selected, without editing). You can, of course, set controller.allowsEditing = NO to prevent the user from editing the video, so that UIImagePickerControllerMediaURL gives you the URL of the original, unedited video.
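As a hedged Swift sketch of reading those two URLs in the picker delegate (modern spellings of the same info keys; note these APIs are deprecated in current iOS in favour of PHPickerViewController, and VideoPickerController is a hypothetical class name):

import UIKit

// Sketch: pick a movie and read both URLs in the delegate callback.
final class VideoPickerController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func pickVideo() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.mediaTypes = ["public.movie"]
        picker.allowsEditing = false          // keep the unedited video
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // Filesystem URL of the movie copy in the sandbox (edited/trimmed if editing was allowed).
        let mediaURL = info[.mediaURL] as? URL
        // Assets Library URL of the original item in the camera roll (deprecated key).
        let referenceURL = info[.referenceURL] as? URL
        print("mediaURL:", mediaURL as Any, "referenceURL:", referenceURL as Any)
        picker.dismiss(animated: true)
    }
}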
AFAIK there is no compression applied to the recorded/selected video by default; that only happens if you press the Share button and try to send the file over MMS, MobileMe, etc. Just make sure you set controller.videoQuality = UIImagePickerControllerQualityTypeHigh to get the highest quality.
I've made an app that plays music using AVAudioPlayer. It either uploads or downloads songs, writes them to Core Data, then recalls them to play when selected. All fifteen songs that I've been testing with play normally in both the iPhone Music app and on my own computer.
However, three of them don't play back on the app. Specifically, I can upload these fifteen songs in any order, clear my Model.sqlite, download them again into the app, and find that three of them just don't play. They do, however, have the right title and artist.
Looking into this, I noticed that the difference is that the non-working files are .m4a. How do I play files of that format with AVAudioPlayer?
EDIT ("Whats "recalling?", what URL do you initialise AVAudioPlayer with?"):
There is a server with songs that the user can access through the app. After choosing which subset S to retrieve, the app then downloads S and writes it to a CoreModel using NSManagedObjectContext. Each song is stored as a separate entity with a unique ID and a relationship to a subset entity (in this case, S).
When I "recall" using the AppDelegate to get the right song using the context, the data is returned as well. I then initialize the AVAudioPlayer like so:
[[AVAudioPlayer alloc] initWithData:(NSData *)[currentSong valueForKey:@"data"] error:nil];
... So I wrote that and then realized that I hadn't actually checked what the error is (silly me). I found that it's OSStatus error 1954115647, which comes back as Unsupported File Type. Looking into this a bit more, I found this question: iPhone: AVAudioPlayer unsupported file type. The solutions presented there are either trimming off bad data at the beginning or initializing from the contents of a URL. Is it possible to find where the data is written in the Core Data store so I can feed that in as the URL?
EDIT: (Compare files. Are they different?)
Yes, they are. I'm grabbing a sample .m4a file from my server, which was uploaded by the app, and comparing it to the one that's in iTunes. What I found is that the file is cut off before offset 229404 (out of 2906191 bytes), which starts 20680001 A0000E21. In the iTunes version, 0028D83B 6D646174 lies before those bytes. Before that is a big block of zeroes preceded by a big block of data preceded by iTunes encoding information. At the very top is more encoding information listing the file as being M4A.
Are you sure your codec is supported in iOS? AVAudioPlayer ought to play any format that iOS supports; you can read the list of supported formats here: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html#//apple_ref/doc/uid/TP40009767-CH2-SW6
I suggest you try manually adding those files to your device through iTunes and playing them in the iPod app; if they won't play, then the problem is not your code or the SDK, but the format.
How are you recalling them to play? Are you writing them to a temporary file with an .m4a extension? The .m4a extension is probably required.
This is not a direct solution, but you probably shouldn't be saving the blobs in Core Data directly. Write the files to a cached location and save the file paths in Core Data. This will both use the database more efficiently and give you a local file path to give to your AVAudioPlayer, which will bypass the problem.
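A minimal Swift sketch of that file-based approach (songData and songName are placeholders for whatever you fetch from Core Data; the .m4a extension lets the player identify the container):

import AVFoundation

// Sketch: write the blob to the caches directory once, then play it from the file URL.
func makePlayer(songData: Data, songName: String) throws -> AVAudioPlayer {
    let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
    let fileURL = caches.appendingPathComponent(songName).appendingPathExtension("m4a")

    if !FileManager.default.fileExists(atPath: fileURL.path) {
        try songData.write(to: fileURL)
    }
    return try AVAudioPlayer(contentsOf: fileURL)
}

Alternatively, AVAudioPlayer(data:fileTypeHint:) lets you keep the data in memory and pass a container hint such as AVFileType.m4a.rawValue, though the cached-file route also solves the database-bloat problem.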
I've got experience with building iOS apps but don't have experience with video. I want to build an iPhone app that streams real time video to a server. Once on the server I will deliver that video to consumers in real time.
I've read quite a bit of material. Can someone let me know if the following is correct and fill in the blanks for me.
To record video on the iPhone I should use the AVFoundation classes. When using AVCaptureSession, the delegate method captureOutput:didOutputSampleBuffer:fromConnection: gives me access to each frame of video. Now that I have the video frame, I need to encode it.
I know that the AVFoundation classes only offer H.264 encoding via AVAssetWriter, and not via a class that easily supports streaming to a web server. Therefore, I am left with writing the video to a file.
I've read other posts that say they use two AVAssetWriters to write 10-second blocks and then NSStream those blocks to the server. Can someone explain how to code two AVAssetWriters working together to achieve this? If anyone has code, could they please share it?
You are correct that the only way to use the hardware encoders on the iPhone is by using the AVAssetWriter class to write the encoded video to a file. Unfortunately the AVAssetWriter does not write the moov atom to the file (which is required to decode the encoded video) until the file is closed.
Thus one way to stream the encoded video to a server would be to write 10 second blocks of video to a file, close it, and send that file to the server. I have read that this method can be used with no gaps in playback caused by the closing and opening of files, though I have not attempted this myself.
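A hedged Swift sketch of that segmented approach (not the code from the linked example): an object you feed video sample buffers from captureOutput:didOutputSampleBuffer:fromConnection:, which closes the current file every ten seconds and starts a new writer; uploading each finished segment is left as a placeholder comment, and the video settings are arbitrary examples.

import AVFoundation
import CoreMedia

// Sketch: rotate AVAssetWriters so each closed file is a complete, playable segment.
final class SegmentedWriter {
    private var writer: AVAssetWriter?
    private var input: AVAssetWriterInput?
    private var segmentStart = CMTime.invalid
    private let segmentDuration = CMTime(seconds: 10, preferredTimescale: 600)

    // Call from the capture delegate with each video sample buffer.
    func append(_ sampleBuffer: CMSampleBuffer) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if writer == nil {
            startSegment(at: pts)
        } else if pts - segmentStart >= segmentDuration {
            finishSegment()                      // closing the file writes the moov atom
            startSegment(at: pts)
        }
        if let input = input, input.isReadyForMoreMediaData {
            _ = input.append(sampleBuffer)
        }
    }

    private func startSegment(at time: CMTime) {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("mp4")
        guard let newWriter = try? AVAssetWriter(outputURL: url, fileType: .mp4) else { return }
        let settings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                       AVVideoWidthKey: 1280,
                                       AVVideoHeightKey: 720]
        let newInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        newInput.expectsMediaDataInRealTime = true
        newWriter.add(newInput)
        newWriter.startWriting()
        newWriter.startSession(atSourceTime: time)
        writer = newWriter
        input = newInput
        segmentStart = time
    }

    private func finishSegment() {
        guard let finishedWriter = writer, let finishedInput = input else { return }
        finishedInput.markAsFinished()
        finishedWriter.finishWriting {
            // The segment at finishedWriter.outputURL is now a complete movie;
            // hand it to your upload code here.
        }
        writer = nil
        input = nil
    }
}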
I found another way to stream video here.
This example opens two AVAssetWriters. On the first frame it writes to both files but immediately closes one of them so the moov atom gets written. Then, with that moov atom data, the second file can be used as a pipe to get a stream of encoded video data. The example only covers sending video data, but it is very clean, easy-to-understand code that helped me figure out how to deal with many issues with video on the iPhone.