I was looking for information on http://developer.apple.com/library/ios as well as on https://stackoverflow.com/, but could not find a simple and elegant solution.
The task is to take an MP3 file from the device's iPod media library and increase its volume. Retrieving the file and playing it back are not the problem.
But these questions remain unresolved:
How do I change the volume and re-encode the MP3 file so that the volume is changed permanently? The solutions given in
iOS: Create an MP3 on device
Xcode, building and dylibs
Trouble playing mp3s after id3 image edit
do not strike me as simple or good.
How do I replace the files in the iTunes library with the ones my program produced? Having to force the user to sync the device and manually drag and drop files into the library disappoints me.
If there are any comments or suggestions, I would appreciate them.
Re-encoding would cause a decrease of the audio quality. The good news is that you don't need to do this: There is a feature called "Sound Check" built into iTunes that ensures that all your songs are played with the same volume. iTunes scans the songs and stores the volume information inside the ID3 tags. For more information on this, read here: http://en.wikipedia.org/wiki/ReplayGain
This also tells you how to implement it on iOS if you still want to do it.
However, there is no way to sync your changes back to your iTunes library.
I found that the Volume Adjustment key in iTunes Music Library.xml lets me boost a track's volume in iTunes =). That suits me much better than re-encoding with lame/ffmpeg and the like. Here is the relevant entry from the library XML:
<key>2009</key>
<dict>
	<key>Track ID</key><integer>2009</integer>
	<key>Name</key><string>Standing On The Shore</string>
	<key>Artist</key><string>Empire Of The Sun</string>
	<key>Album Artist</key><string>Empire Of The Sun</string>
	<key>Album</key><string>Walking On A Dream</string>
	<key>Genre</key><string></string>
	<key>Kind</key><string>MPEG audio file</string>
	<key>Size</key><integer>10564118</integer>
	<key>Total Time</key><integer>263836</integer>
	<key>Track Number</key><integer>1</integer>
	<key>Date Modified</key><date>2011-11-26T19:52:01Z</date>
	<key>Date Added</key><date>2011-09-23T20:59:24Z</date>
	<key>Bit Rate</key><integer>320</integer>
	<key>Sample Rate</key><integer>44100</integer>
	<key>Volume Adjustment</key><integer>192</integer>
	<key>Play Count</key><integer>12</integer>
	<key>Play Date</key><integer>3402675506</integer>
	<key>Play Date UTC</key><date>2011-10-28T17:38:26Z</date>
	<key>Skip Count</key><integer>2</integer>
	<key>Skip Date</key><date>2011-11-06T10:15:51Z</date>
	<key>Rating</key><integer>100</integer>
	<key>Album Rating</key><integer>60</integer>
	<key>Album Rating Computed</key><true/>
	<key>Persistent ID</key><string>2CF1305AEEF0FCDB</string>
	<key>Track Type</key><string>File</string>
	<key>Location</key><string>file://localhost/Users/wins/Music/iTunes/iTunes%20Media/Music/Empire%20Of%20The%20Sun/Walking%20On%20A%20Dream/01%20Standing%20On%20The%20Shore.mp3</string>
	<key>File Folder Count</key><integer>5</integer>
	<key>Library Folder Count</key><integer>1</integer>
</dict>
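For illustration, here is a sketch of bumping that key programmatically with Python's plistlib. The miniature library XML below is a stand-in for the real iTunes Music Library.xml (which has many more keys), and the value 255 is just an example; work on a copy of your library file:

```python
import plistlib

# Minimal stand-in for an iTunes Music Library.xml.
library_xml = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Tracks</key>
    <dict>
        <key>2009</key>
        <dict>
            <key>Track ID</key><integer>2009</integer>
            <key>Name</key><string>Standing On The Shore</string>
            <key>Volume Adjustment</key><integer>192</integer>
        </dict>
    </dict>
</dict>
</plist>"""

library = plistlib.loads(library_xml)
track = library["Tracks"]["2009"]
# Volume Adjustment ranges from -255 to 255 in the library XML.
track["Volume Adjustment"] = 255
print(track["Volume Adjustment"])
```

Note that iTunes rewrites this file itself, so edits made while iTunes is running may be lost; the safe path is still to change the slider in iTunes and let it write the key.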
On the 4th gen Apple TV you can select a custom JSON file URL to load screensaver movies other than the Apple ones. To do that, go into Settings, move to About, and while on About press Play/Pause 4 times. This enters a store/channel mode which, when you select channel mode, lets you specify the URL to fetch the movies from via an intermediary JSON file describing the download URLs. This worked once for me, and one of my own movies was downloaded. But I have since changed the URL, and for several days the Apple TV has not downloaded any of the new movies.
I have both 1080p versions (about 600MB per movie) and 720p versions (about 70MB per movie) available. My version of the JSON file is located here: http://wx.inside.net/sat/ss.json, you can compare this with Apple's own version here: http://a1.phobos.apple.com/us/r1000/000/Features/atv/AutumnResources/videos/entries.json, I think the syntax is OK in mine.
The Apple TV has been on permanently, so it would have had ample time to download the new movies, and the movies have been available (and play fine on an iPad or MBP using the URLs from the JSON file).
Questions:
- Is there a way to get some feedback from the Apple TV as to whether it has fetched the latest JSON file?
- Can the locally stored / cached movies be erased to force a download of the new movies?
- Are there any size / quality limitations on what movie files it will play?
- Can I somehow force the Apple TV to reload the screensavers URL?
In addition to the above, you can make the Apple TV use your own JSON for the screensavers even when not in channel mode (i.e. in normal operating mode) by "spoofing" its DNS queries (in the whitest hat way possible).
Simply set up a DNS server that authoritatively resolves a1.phobos.apple.com to your own server but forwards any other queries, then set up a web server that answers to that name and replicate the full path to the JSON file, but (obviously) have it point at your own file.
Then set the DNS manually in the TVs network configuration to your DNS. Done!
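As a concrete sketch, with dnsmasq this could look roughly like the following (the server address 192.168.1.10 and the upstream 8.8.8.8 are placeholders of mine; verify the exact options against the dnsmasq documentation):

```
# /etc/dnsmasq.conf
# Answer a1.phobos.apple.com authoritatively with our own web server...
address=/a1.phobos.apple.com/192.168.1.10
# ...and forward every other query upstream.
server=8.8.8.8
```

Your web server then has to answer to that host name and replicate the full path /us/r1000/000/Features/atv/AutumnResources/videos/entries.json, serving your own JSON there.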
Problem solved: movie files need to have the file name extension .mov (I used .mp4), irrespective of the content format. Way to go, Apple...
Hi Balthasar,
I'm reaching out to you for help regarding the possibility of streaming my own videos instead of the Aerial videos as a screensaver in the Apple TV.
As far as my online research goes, I understand that the process is as follows:
With an Apple TV 4th Generation hooked up and running...
Go to Settings > General, then press the Play/Pause button 4 times on the Apple TV Remote until you get to the Demo mode settings. Inside the Demo Mode settings you are presented with 3 options: Off, Apple Store, and Channel. If you enable "Channel" mode, there is a config URL that you can use to feed info for the screensavers. I don't know how to actually accomplish this last part and was wondering if you could elaborate.
Based on another post on Stack Overflow I have downloaded the .json file and understand that I have to put my modified .json file on a server and enter the address where it is hosted into the URL field of the Channel mode on the Apple TV. I also understand that my own videos have to be .mov in order to work. However, inside the .json file I have no clue about the following, marked in bold:
[
  {
    "id" : "73F3F654-9EC5-4876-8BF6-474E22029A49",
    "assets" : [
      {
        "url" : "http://a1.phobos.apple.com/us/r1000/000/Features/atv/AutumnResources/videos/comp_GL_G004_C010_v03_6Mbps.mov",
        "accessibilityLabel" : "Greenland",
        "type" : "video",
        "id" : "D388F00A-5A32-4431-A95C-38BF7FF7268D",
        "timeOfDay" : "day"
      },
––––––––––––
Concrete questions:
1) Can you give examples of how to configure the server to serve that .json file? Can I host this file on Apache or Windows IIS?
2) The .json file lists a bunch of video files, but I might not be hosting the same number; for now it will most likely be just a couple, though in the future I could host many more. Can I just erase the entries and leave only the ones that I need?
3) What is the "id" in the code, and how do I get it for my own videos? Is it needed for the videos to be fetched by the Apple TV?
4) What about the accessibilityLabel? Is it required? Can I omit it, delete it, or simply replace it with my own label?
5) Same with timeOfDay. What am I supposed to use this for?
6) Where should I host the video files? Can I use a cloud service like Google Drive, Dropbox, or OneDrive? If not, what kind of server should be used?
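Pending those answers, here is a sketch of how I would generate a trimmed-down feed with just two of my own videos, shaped like Apple's entries.json. The host name, file names, and the freshly generated UUIDs are all assumptions of mine, not anything confirmed by Apple:

```python
import json
import uuid

# Hypothetical host serving .mov files; replace with your own server.
BASE = "http://example.com/screensavers"

def asset(name, label, time_of_day):
    """Build one asset entry in the shape of Apple's entries.json."""
    return {
        "url": f"{BASE}/{name}",
        "accessibilityLabel": label,
        "type": "video",
        "id": str(uuid.uuid4()).upper(),  # made-up ID; unknown if the ATV checks it
        "timeOfDay": time_of_day,
    }

feed = [{
    "id": str(uuid.uuid4()).upper(),
    "assets": [
        asset("beach.mov", "Beach", "day"),
        asset("city.mov", "City", "night"),
    ],
}]

with open("ss.json", "w") as f:
    json.dump(feed, f, indent=2)
```

The resulting ss.json can then be dropped into any static web server's document root.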
Please be as specific and descriptive as possible.
Please help me out. I'm sure many others are interested in this feature and would love to learn how to get the most out of their Apple TV. I promise to make a video or a very detailed guide on how to do this "for Dummies" like myself, so that we can spread the word.
Thank you very much in advance.
My goal is to create a sampler instrument for iPhone/iOS.
The Instrument should play back sound files on different pitches/notes and it should have a volume envelope.
A volume envelope means that the sound's volume fades in when it starts to play.
I have tried countless ways of creating that. The desired way is to use an AVAudioEngine's AVPlayerNode, then process the individual samples of that node in realtime.
Unfortunately I have had no success with that approach so far. Could you give me some pointers on how this works on iOS?
Thanks,
Tobias
PS: I have not learned the Core Audio framework yet. Maybe it is possible to access an AVAudioNode's Audio Unit to do this job, but I haven't had the time to read into the framework.
A more low-level way is to read the audio from the file and process the audio buffers.
You store the ADSR envelope in an array or, better, as a mathematical function that outputs the envelope value for the sound index you pass it (using interpolation), so that the envelope maps onto any sound's duration.
Then you multiply the audio sample with the returned envelope value to get the filtered sample.
One way would be to use the AVAudioNode and link a processing node to it.
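The multiply-by-envelope idea is independent of Core Audio; here is a small Python sketch with a linearly interpolated envelope (the breakpoint values are made-up examples):

```python
# ADSR-style envelope: breakpoints as (normalized position, gain).
# 0.0-0.1 attack, 0.1-0.2 decay, 0.2-0.8 sustain, 0.8-1.0 release.
ENVELOPE = [(0.0, 0.0), (0.1, 1.0), (0.2, 0.8), (0.8, 0.8), (1.0, 0.0)]

def envelope_value(pos):
    """Interpolate the envelope gain for a position in [0, 1]."""
    for (x0, y0), (x1, y1) in zip(ENVELOPE, ENVELOPE[1:]):
        if x0 <= pos <= x1:
            t = (pos - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return 0.0

def apply_envelope(samples):
    """Scale each sample by the envelope value at its relative position."""
    n = len(samples)
    return [s * envelope_value(i / (n - 1)) for i, s in enumerate(samples)]

# A constant-amplitude buffer fades in, sustains at 0.8, and fades out.
shaped = apply_envelope([1.0] * 101)
```

In a real render callback you would run the same multiplication per buffer, tracking the absolute sample position across buffers instead of recomputing it from the list index.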
I looked at another post of yours; I think AUSampler - Controlling the Settings of the AUSampler in Real Time is what you're looking for.
I haven't yet used AVAudioUnitSampler, but I believe it is just a wrapper for the AUSampler. To configure an AUSampler you must first make and export a preset file on your Mac using AULab. This file is a plist which contains file references and sampler settings such as decay, volume, pitch, cutoff, and all of the good stuff that the AUSampler is built for. This file then goes into your app bundle. You also create a directory named "Sounds", copy all of the referenced audio samples into that folder, and put it in your app bundle as well (as a folder reference). Then you create your audio graph (or in your case AVAudioEngine) and sampler, and load the preset from the preset file in your app bundle. It's kind of a pain. The links I'm providing are what I used to get up and running, but they are a little dated; if I were to start now I would definitely look into AVAudioUnitSampler first to see if there are easier ways.
To get AULab, go to Apple's developer downloads and select "Audio Tools for Xcode". Once downloaded, just open the DMG and drag the folder anywhere (I drag it to my Applications folder). Inside is AULab.
Here is a technical note describing how to load presets, another technical note on how to change parameters (such as attack/decay) in real time, and here is a WWDC Video that walks you through the whole thing including the creation of your preset using AULab.
I'm going to build an iOS application in which you can download audio files and listen to them in offline mode. So far I have found that the app can be built with the following process: create an XML file with the list of audio files on the server, fetch the XML with ASIHTTPRequest, and parse it in the app (well, I still have problems doing this, but my question is something else!).
Before starting to code I want to know:
what the right data format for the audio files is (I know that .mp3, .aac, .m4a, .mp4, .3gp, CAF, MPEG, WAVE, NeXT, ... could be used), but I need to know whether there is a best format to choose for better functionality in the app.
whether there are special considerations I should take regarding 3G and Wi-Fi connections when downloading files (I need to know about limitations or techniques for implementing them).
I ask these questions so I can estimate the time needed to create the app, given the techniques I should use.
I've made an app that plays music using AVAudioPlayer. It either uploads or downloads songs, writes them to Core Data, then recalls them to play when selected. All of the fifteen songs that I've been testing with operate normally using both the iPhone Music Client and my own computer.
However, three of them don't play back on the app. Specifically, I can upload these fifteen songs in any order, clear my Model.sqlite, download them again into the app, and find that three of them just don't play. They do, however, have the right title and artist.
Looking into this, I noticed that the difference is that the non-working files are .m4a. How do I play files of that format with AVAudioPlayer?
EDIT ("Whats "recalling?", what URL do you initialise AVAudioPlayer with?"):
There is a server with songs that the user can access through the app. After choosing which subset S to retrieve, the app then downloads S and writes it to a CoreModel using NSManagedObjectContext. Each song is stored as a separate entity with a unique ID and a relationship to a subset entity (in this case, S).
When I "recall" using the AppDelegate to get the right song using the context, the data is returned as well. I then initialize the AVAudioPlayer like so:
[[AVAudioPlayer alloc] initWithData:(NSData *)[currentSong valueForKey:@"data"] error:nil];
... So I wrote that and then realized that I haven't actually checked out what the error is (silly me). I found that it's OSStatus error 1954115647, which returns as Unsupported File Type. Looking into this a bit more, I found this iPhone: AVAudioPlayer unsupported file type. A solution is presented there as either trimming off bad data in the beginning or initializing from the contents of a URL. Is it possible to find where the data is written to in core model to feed that as the URL?
EDIT: (Compare files. Are they different?)
Yes, they are. I'm grabbing a sample .m4a file from my server, which was uploaded by the app, and comparing it to the one that's in iTunes. What I found is that the file is cut off before offset 229404 (out of 2906191 bytes), which starts 20680001 A0000E21. In the iTunes version, 0028D83B 6D646174 lies before those bytes. Before that is a big block of zeroes preceded by a big block of data preceded by iTunes encoding information. At the very top is more encoding information listing the file as being M4A.
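A comparison like this can be scripted rather than done by eye in a hex editor; here is a small Python sketch (the file names in the usage line are placeholders) that reports the first byte offset at which two files differ:

```python
def first_difference(path_a, path_b, chunk=4096):
    """Return the first byte offset at which the two files differ,
    or None if they are identical (including length)."""
    offset = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk), fb.read(chunk)
            if a != b:
                for i, (x, y) in enumerate(zip(a, b)):
                    if x != y:
                        return offset + i
                return offset + min(len(a), len(b))  # one file is shorter
            if not a:  # both streams exhausted simultaneously
                return None
            offset += chunk

# Usage (placeholder names):
# first_difference("server.m4a", "itunes.m4a")
```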
Are you sure your codec is supported on iOS? AVAudioPlayer ought to play any format that iOS supports; you can read the list of supported formats here: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html#//apple_ref/doc/uid/TP40009767-CH2-SW6 .
I suggest you try manually adding those files to your device through iTunes and playing them in the iPod app; if they won't play, then the problem is not your code or the SDK, but the format.
How are you recalling them to play? If you are writing them to a temporary file, give it an .m4a extension; this extension is probably required.
This is not a direct solution, but you probably shouldn't be saving the blobs in Core Data directly. Write the files to a cached location and save the file paths in Core Data. This will both use the database more efficiently and give you a local file path to give to your AVAudioPlayer, which will bypass the problem.
Is it possible to implement a feature that allows users to watch videos while they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However, it will be very hard to implement.
Also, I am not that into Python and am not aware of a library or service suiting your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded, and not streams; for streams there are obviously thousands of solutions out there...
In the simplest case the video being uploaded is already ready to be served to your clients and has a so-called "faststart atom". These atoms are container-format specific, and there are sometimes a bunch of them. The most common is the moov atom. It contains a lot of data and is very complex; for our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away using the data available from the beginning.
You need that for progressive-download videos (YouTube...), meaning a file served from a web server: you obviously have not downloaded the full file, yet the player can already start playing.
If the faststart atom were not present, that would not be possible.
Sometimes playback works anyway, but the player cannot, for example, display a progress bar, because it doesn't know how long the file is.
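Whether a file has its moov atom in front of the mdat can be checked by walking the top-level atoms of the container. A simplified Python sketch (it ignores 64-bit extended atom sizes, which real files can use):

```python
import struct

def top_level_atoms(data):
    """Yield (atom_type, offset) for the top-level atoms of MP4/MOV bytes.
    Simplified: stops on malformed or 64-bit extended sizes."""
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack(">I", data[pos:pos + 4])
        atom_type = data[pos + 4:pos + 8].decode("latin-1")
        yield atom_type, pos
        if size < 8:  # extended size (1) or garbage; bail out of the simple scan
            break
        pos += size

def is_faststart(data):
    """True if moov appears before mdat, i.e. playable while downloading."""
    order = [t for t, _ in top_level_atoms(data)]
    return "moov" in order and "mdat" in order and order.index("moov") < order.index("mdat")
```

Tools like ffmpeg (with the movflags faststart option) or qt-faststart can relocate the moov atom when it sits at the end of the file.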
With that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or a file (a file will be easier...).
This is almost always the case; for example, PHP creates a file in the tmp_dir. You can also specify the location if you want to find the video while it is being uploaded.
Now you can start reading that file byte by byte and write the data to a connection to another client. Just be sure not to get ahead of what has already been received and written. You would probably accompany the upload with metadata kept in memory that holds the currently received byte position and the location of the file.
Anyone who requests the file after the upload has finished can just receive the entire file; if the upload is not yet finished, they get it from your application.
You will have to throttle the data delivery or pause it when you run short of data. To the client this will appear almost like a "slow connection". However, you will have to send some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
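The read-behind-the-writer loop described above could be sketched in Python like this (the poll interval and chunk size are arbitrary; a real server would also send keep-alives and handle client backpressure at the marked spot):

```python
import time

def stream_growing_file(path, is_upload_finished, chunk=64 * 1024, poll=0.2):
    """Generator yielding chunks of a file that may still be growing.
    `is_upload_finished` is a callable telling us the writer is done."""
    offset = 0
    with open(path, "rb") as f:
        while True:
            # Check the flag BEFORE reading: if the writer was already done
            # then, an empty read afterwards really means end of file.
            finished = is_upload_finished()
            f.seek(offset)
            data = f.read(chunk)
            if data:
                offset += len(data)
                yield data
            elif finished:
                return  # writer done and everything delivered
            else:
                time.sleep(poll)  # data ran short: wait, and in a real
                                  # server send a keep-alive here
```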
Now, if you want something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK ffmpeg has neat APIs which let you deal directly with data streams.
HandBrake is also a very good tool; however, with it you would need to take the long road of calling external executables.
I am not really aware of your requirements; however, if your clients are already tuned in, for example to a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think you will find an "out of the box" solution that requires little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to server by others
Well, this could be interpreted two different ways:
Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
Or do you mean that you want one user to be able to watch a partially uploaded video (aka another user is still in the process of uploading it and right now the server only contains a partial upload of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user in whatever way you choose. HTML5 vs. Flash isn't really a consideration here.
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say that you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would think of this as a large task.