I need to develop a Video Player component to consume/play publishing points (On Demand and Live) from Media Services. I'm using Silverlight 3.
I got a prototype working with the SL 3 MediaElement control. Since the control lacks any generic media player functionality (play/pause/seek, etc.), I need to build on top of it, but my guess is this has already been done. The closest I got was SL2VideoPlayer, which has the media player features I want but doesn't work with Media Services streams. Besides, it's based on SL 2, not 3.
Can anyone offer suggestions? My requirements are:
1. Support basic video player functionality
2. Support Media Services streams (live and on-demand)
3. Open source (so I can extend it to match my requirements)
Silverlight's MediaElement has the Play and Stop functions and the CurrentState property, which are some of the things you'd need to expose to create your own Video Player. You can easily add buttons to the Silverlight Canvas to call those functions.
You can also register your SL app as a scriptable object, which allows interaction from JavaScript on the HTML page:
System.Windows.Browser.HtmlPage.RegisterScriptableObject("scriptobject", this);
Then just create public functions adorned with the [ScriptableMember] attribute to allow consumption from JavaScript:
[ScriptableMember]
public void Play()
{
    MediaElement.Play();
}
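On the HTML page, JavaScript can then reach the registered object through the plugin's Content property. A minimal sketch - the element id "silverlightControl" is just a placeholder for whatever id your object tag uses:
// Host-page script: grab the Silverlight plugin and call the registered scriptable object.
const plugin = document.getElementById("silverlightControl") as any;
plugin.Content.scriptobject.Play(); // invokes the [ScriptableMember] Play() shown above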
I am a newbie in video streaming and I just built a sample website which plays videos. I simply give the video file location to the video tag in HTML5. I noticed that on YouTube the video tag contains a blob URL, and when I looked into this I found that the video data arrives in segments; I also came across a term called pseudo-streaming. By contrast, it seems like the website I built downloads the whole file and then plays the video. I am not trying to do any live streaming, just trying to stream local videos. I thought maybe the segmented delivery is done by a video streaming server. I came across the RED5 open source streaming server, but most of the examples given are for live streaming, which I am not experimenting with. It's been a few days and I am not sure whether I am on the right track.
The segmented approach you refer to is there to support Adaptive Bit Rate streaming (ABR).
ABR allows the client device or player to download the video in chunks, e.g. 10-second chunks, and select the next chunk from the bit rate most appropriate to the current network conditions. See here for an example:
https://stackoverflow.com/a/42365034/334402
For your existing site, as long as your server supports range requests, you probably are not actually downloading the whole video. With range requests, the browser or player requests just part of the file at a time, so it can start playback before the whole file is downloaded.
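If you want to check whether your server honours range requests, a rough sketch along these lines should tell you (the URL is just a placeholder):
// Ask for the first KB of the file and see whether the server answers with 206 Partial Content.
async function supportsRangeRequests(url: string): Promise<boolean> {
  const res = await fetch(url, { headers: { Range: "bytes=0-1023" } });
  // 206 means the server honoured the Range header; 200 means it ignored it and sent the whole file.
  return res.status === 206;
}

supportsRangeRequests("https://example.com/video.mp4")
  .then(ok => console.log(ok ? "Range requests supported" : "Range requests not supported"));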
For MP4 files, it is worth noting that you need the header information, which is contained in an 'atom' called the moov atom, to be at the start of the file rather than at the end - many encoders write it at the end by default. There are a number of tools which will allow you to move it to the start - e.g.:
http://multimedia.cx/eggs/improving-qt-faststart/
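If you already have ffmpeg available, the same relocation can be done with the faststart flag while copying the streams (the filenames here are placeholders):
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4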
You are definitely on the right track with your investigations - video hosting and streaming is a specialist area, so it is generally easier to leverage existing streaming technologies and services rather than to build them yourself. Some good places to look to get a feel for open source solutions:
https://gstreamer.freedesktop.org
http://www.videolan.org/vlc/streaming.html
I am making a drum machine and have implemented a recording function using the recorderJS library. The problem, as you may expect, is limited functionality in terms of not being able to edit the recorded clips. So my question is: if I were to implement an audio editor that allows the user to trim the clip, how would I go about saving the edited clip back onto the web server?
Is this even possible using Web Audio API?
Many Thanks
The Web Audio API doesn't do this for you; you need a back-end server that can accept uploads. You'll also probably want to re-encode the audio data (as WAV, MP3, OGG, etc.).
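As a rough sketch, assuming the edited clip ends up as a Blob on the client (Recorder.js's exportWAV callback typically gives you a WAV Blob) and that your server exposes an upload endpoint - the /upload URL and the "audio" field name here are made up:
// Post the edited clip to the server as multipart/form-data.
async function uploadClip(clip: Blob, filename: string): Promise<void> {
  const form = new FormData();
  form.append("audio", clip, filename); // field name is whatever your back end expects
  const res = await fetch("/upload", { method: "POST", body: form });
  if (!res.ok) {
    throw new Error(`Upload failed with status ${res.status}`);
  }
}
On the server side it is then an ordinary file upload: write the received body to disk or storage like any other uploaded file.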
We're currently looking at taking our music visualization software that's been around for many years to an iOS app that plays music via the new iOS Spotify SDK -- check out http://soundspectrum.com to see our visuals such as G-Force and Aeon.
Anyway, we have the demo projects in the Spotify iOS SDK all up and running and things look good, but the major step forward is to get access to the PCM audio so we can send it into our visual engines, etc.
Could a Spotify dev or someone in the know kindly suggest what possibilities are available to get hold of the PCM audio? The PCM block can be as simple as a circular buffer of a few thousand of the latest samples (that we would use for FFTs, etc.).
Thanks in advance!
Subclass SPTCoreAudioController and do one of two things:
1. Override connectOutputBus:ofNode:toInputBus:ofNode:inGraph:error: and use AudioUnitAddRenderNotify() to add a render callback to destinationNode's audio unit. The callback will be called as the output node is rendered and will give you access to the audio as it's leaving for the speakers. Once you've done that, make sure you call super's implementation for the Spotify iOS SDK's audio pipeline to work correctly.
2. Override attemptToDeliverAudioFrames:ofCount:streamDescription:. This gives you access to the PCM data as it's produced by the library. However, there's some buffering going on in the default pipeline, so the data given in this callback might be up to half a second behind what's going out to the speakers, which is why I'd recommend suggestion 1 over this. Call super here to continue with the default pipeline.
Once you have your custom audio controller, initialise an SPTAudioStreamingController with it and you should be good to go.
I actually used suggestion 1 to implement iTunes' visualiser API in my Mac OS X Spotify client that was built with CocoaLibSpotify. It's not working 100% smoothly (I think I'm doing something wrong with runloops and stuff), but it drives G-Force and Whitecap pretty well. You can find the project here, and the visualiser stuff is in VivaCoreAudioController.m. The audio controller class in CocoaLibSpotify and that project is essentially the same as the one in the new iOS SDK.
I know there is this &hd=1 code to start a YouTube video in 720p. Is there a code or trick to add at the end of a YouTube video URL to start in 1080p?
Seems to be working again :)
Using an example:
before:
http://www.youtube.com/watch?v=ecsCrOEYl7c
after:
http://www.youtube.com/watch_popup?v=ecsCrOEYl7c&vq=hd1080
Note both changes: watch_popup and &vq=hd1080.
Other possible values can be found here:
https://developers.google.com/youtube/iframe_api_reference#Playback_quality
You can also change the start time of the player by appending this (to 1 minute and 22 seconds in this example):
&t=1m22s
Some documentation can be found here:
https://developers.google.com/youtube/
It's not possible to set the quality to 1080p with a URL alone. Some years ago it was possible by adding &fmt=37, but that doesn't work anymore.
However, if you can use JavaScript, the YouTube API will allow you to select the quality.
From the documentation:
hd (supported players: AS2)
Values: 0 or 1. Default is 0. Setting to 1 enables HD playback by default. This has no effect on the Chromeless Player. It also has no effect if an HD version of the video is not available. If you enable this option, keep in mind that users with a slower connection may have a sub-optimal experience unless they turn off HD. You should ensure your player is large enough to display the video in its native resolution.
The AS2 player will be retired in October 2012, and the embed codes on the YouTube website load the AS3 player by default. To show hd1080 you need to use the JavaScript API. The functions are described here.
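As a rough sketch with the IFrame Player API (assuming the API script is already loaded and an empty div with id "player" exists on the page; setPlaybackQuality only suggests a quality, and YouTube may fall back if 1080p isn't available):
// Create a player and suggest 1080p once it is ready.
declare const YT: any; // provided by the IFrame Player API script

const player = new YT.Player("player", {
  videoId: "ecsCrOEYl7c",
  events: {
    onReady: (event: any) => event.target.setPlaybackQuality("hd1080"),
  },
});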
I want to fast-forward and rewind recorded audio in a J2ME and BlackBerry application.
Is there any sample code available? How do I do it?
As a starting point, read the specification of JSR-135: http://www.jcp.org/en/jsr/detail?id=135
Once you have started a Player object, fast-forward and rewind are done by calling the following three methods:
Player.stop();
Player.setMediaTime(); // the new media time is given in microseconds
Player.start();
To calculate the value of the parameter you pass to setMediaTime(), you will probably need to call
Player.getMediaTime();
Once you've got all that working, check the BlackBerry documentation to see if there are any differences between the standard J2ME API and the BlackBerry APIs in that area.