I am trying to develop a sort of internet radio application, but I don't have any idea how to approach it. I have searched a lot, but I can't find information to point me in the right direction.
While searching, I came across a player that people use to access SHOUTcast broadcasts on their BlackBerry.
I also came across this link.
So, any ideas on how it can be used for developing the application? Hoping for some pointers.
Taking a close look at the source code for the application you linked to is a great start toward understanding how to do streaming audio on the BlackBerry. It implements a custom SourceStream and DataSource and uses those to feed audio data from the network into a Player instance.
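To make that pattern concrete, here is a minimal sketch (not the linked app's actual code) of a custom DataSource whose single SourceStream pulls bytes from an HTTP connection. The stream URL and the audio/mpeg content type are assumptions, and a real SHOUTcast client would also need to handle ICY metadata:

```java
import java.io.IOException;
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.StreamConnection;
import javax.microedition.media.Control;
import javax.microedition.media.protocol.ContentDescriptor;
import javax.microedition.media.protocol.DataSource;
import javax.microedition.media.protocol.SourceStream;

public class HttpAudioDataSource extends DataSource {
    private StreamConnection connection;
    private InputStream input;
    private SourceStream[] streams;

    public HttpAudioDataSource(String locator) {
        super(locator);
    }

    public void connect() throws IOException {
        // Open the network connection the stream will read from.
        // On a real device you may need BlackBerry transport suffixes
        // appended to the URL (e.g. ";deviceside=true").
        connection = (StreamConnection) Connector.open(getLocator());
        input = connection.openInputStream();
        streams = new SourceStream[] { new NetworkSourceStream() };
    }

    public void disconnect() {
        try {
            if (input != null) input.close();
            if (connection != null) connection.close();
        } catch (IOException e) {
            // nothing sensible to do on teardown
        }
    }

    public String getContentType() { return "audio/mpeg"; } // assumed MP3 stream
    public SourceStream[] getStreams() { return streams; }
    public void start() {}
    public void stop() {}
    public Control getControl(String controlType) { return null; }
    public Control[] getControls() { return new Control[0]; }

    // Hands raw network bytes to the Player as a non-seekable, endless stream.
    private class NetworkSourceStream implements SourceStream {
        private final ContentDescriptor descriptor = new ContentDescriptor("audio/mpeg");

        public ContentDescriptor getContentDescriptor() { return descriptor; }
        public long getContentLength() { return -1; }   // live stream: length unknown
        public int getSeekType() { return NOT_SEEKABLE; }
        public int getTransferSize() { return 4096; }   // suggested read chunk size
        public int read(byte[] buffer, int offset, int length) throws IOException {
            return input.read(buffer, offset, length);  // blocks until data arrives
        }
        public long seek(long where) throws IOException {
            throw new IOException("stream is not seekable");
        }
        public long tell() { return 0; }
        public Control getControl(String controlType) { return null; }
        public Control[] getControls() { return new Control[0]; }
    }
}
```

The Player is then created from the source in the usual MMAPI way:

```java
HttpAudioDataSource source = new HttpAudioDataSource("http://host:8000/;"); // hypothetical SHOUTcast URL
Player player = Manager.createPlayer(source);
player.realize();
player.start();
```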
The best source for information is the BlackBerry Developer website. There you can find guides such as the Audio and Video Playback guide, which covers this in great detail, including code samples.
I hope you are doing well!
I am working on an eLearning website and came across the topic of video loading. Since videos vary in size, it would be impractical to make the user wait for the entire video to download before they can start watching, so it must be delivered as a stream that keeps loading content as the user watches (similar to YouTube, I guess). However, I am failing to find out how this works. I've been recommended SCORM and xAPI for this, but I can only find help on how to upload SCORM files or how to write xAPI code, not how to set them up on our website.
How can we make our videos download as the user watches? Are SCORM and xAPI actually what we should be looking for?
For context, we will be using React JS for our Frontend and will be saving the videos on a server.
I would greatly appreciate any advice you have and thank you for your time!
We tried using xAPI and SCORM; however, we don't understand how they might help.
SCORM and xAPI by themselves are not going to help you with this. To stream video in an eLearning course, you will need a video player (such as the HTML5 video element or video.js) that understands streaming video protocols, and you will need to encode the video files in a format supported by that player. I would suggest reading about HLS, for instance; though I didn't read the entire page, this is a good place to start: https://www.dacast.com/blog/hls-streaming-protocol/
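For what it's worth, even a plain MP4 served with HTTP Range support will begin playing in an HTML5 video element before it has fully downloaded; HLS adds segmentation and adaptive bitrate on top of that. A minimal sketch of HLS playback in a React component, assuming the hls.js package and a master.m3u8 playlist you have already encoded on the server (e.g. with ffmpeg), might look like this:

```tsx
import { useEffect, useRef } from "react";
import Hls from "hls.js";

// Plays an HLS stream; `src` points at a playlist, e.g. "/videos/lesson1/master.m3u8".
export function StreamingVideo({ src }: { src: string }) {
  const videoRef = useRef<HTMLVideoElement>(null);

  useEffect(() => {
    const video = videoRef.current;
    if (!video) return;

    if (video.canPlayType("application/vnd.apple.mpegurl")) {
      // Safari plays HLS natively.
      video.src = src;
      return;
    }
    if (Hls.isSupported()) {
      // Other browsers get HLS via Media Source Extensions.
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(video);
      return () => hls.destroy(); // tear down on unmount
    }
  }, [src]);

  return <video ref={videoRef} controls width="100%" />;
}
```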
A traditional eLearning course, such as you would have with SCORM, provides a reasonable way to wrap the playing of a video so that it can be launched for a learner via an LMS, and it may capture data such as completion. xAPI was probably suggested because it provides a more robust way to capture interaction data, such as when the learner plays, pauses, or seeks within a video. My preferred approach is to leverage cmi5; there is an example of xAPI video profile usage within a cmi5 course in the Project CATAPULT sample content (see https://github.com/adlnet/CATAPULT/tree/main/course_examples). It could be adapted to leverage something like HLS to get streaming capability. Confirm ahead of time that your LMS of choice supports cmi5, as adoption is still lower than for SCORM.
SCORM Cloud (a bit of a misnomer, https://cloud.scorm.com/) provides built-in video handling via the cmi5 mechanism and will soon support streaming video from sources beyond just YouTube, without the need to author a course separately.
Currently I have a Flutter app that needs to be protected from screenshots and screen recording,
but when I searched for this, I found there is no official way to implement it.
It seems there are some tricks, though (like 60fps? I know the concept, but I don't know how to implement it).
You can also see a black screen when you record video in Netflix (they prevent it in some way).
How could I achieve this? Thanks.
There is a package called flutter_windowmanager that does just what you are asking: it prevents external apps from recording the screen and blocks screenshots from being taken.
A detailed tutorial is given in this article.
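Assuming the package meant here is flutter_windowmanager (Android only), the core of it is one call that sets Android's FLAG_SECURE on the app window, which is the same mechanism that makes Netflix come out black in recordings. A minimal sketch:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_windowmanager/flutter_windowmanager.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  // FLAG_SECURE blocks screenshots and blanks the app in screen recordings.
  await FlutterWindowManager.addFlags(FlutterWindowManager.FLAG_SECURE);
  runApp(MaterialApp(home: Scaffold(body: Center(child: Text('protected')))));
}
```

Note that FLAG_SECURE is an Android concept; iOS has no equivalent public API, so screen-recording protection there is much more limited.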
YouTube recently forced everyone onto the new YouTube Studio for live streaming. Per their documentation here (https://support.google.com/youtube/answer/2853812?hl=en), the only place to set up multiple camera angles is on the Events page. However, since their change I can't access Classic Streaming anymore; it just pops up a message saying
Live Control Room is the new way to go live.
I've tried reaching out to YouTube directly, but I'm stuck in a support loop of uselessness. Has anyone else seen this and found a workaround?
Direct email from YouTube:
I just want to make all a clarification regarding your concern why you can't find to stream with multiple camera angles.
Due because of low usage and since it can't be watched on mobile, multi-camera won't migrate to Live Control Room right now. Rest assured we are looking for a better version of this product in the near future.
We also recommend you to send a feedback to our Product team so they can look into your suggestion.
Thanks for your understanding on this matter. Let me know if you have other questions.
I am looking into developing an application that transcribes an audio file for me and then gives me a document with the words or phrases and the times they were spoken, just like YouTube does. I could just upload files to YouTube and then get the transcript, but I want to work offline. Can anyone help? Where can I start?
Not sure about YouTube, but I would start with the Google Cloud Speech API, and if you're not happy with it, then I'd go through these 5 as well.
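As a concrete starting point, a minimal sketch of a Google Cloud Speech-to-Text request with word-level timestamps, using the google-cloud-speech Python client, could look like the following. Note it is a cloud service, so this part won't run offline:

```python
from google.cloud import speech


def transcribe_with_timestamps(path: str) -> None:
    client = speech.SpeechClient()  # needs GOOGLE_APPLICATION_CREDENTIALS set

    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # 16-bit PCM WAV
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_word_time_offsets=True,  # per-word start/end times
    )

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        for word in result.alternatives[0].words:
            print(f"{word.word}: {word.start_time.total_seconds():.2f}s"
                  f" - {word.end_time.total_seconds():.2f}s")


transcribe_with_timestamps("audio.wav")  # hypothetical local file
```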
Also, bear in mind that Chrome has the Web Speech API built in (and most likely Firefox has something similar, but I never had a need to explore that), so if what you're doing is for the web, you should check that out too.
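For completeness, the Web Speech API route looks roughly like the sketch below. Be aware that it transcribes live microphone input in Chrome rather than an existing audio file, so it only partially fits this use case:

```js
// Chrome exposes the recognizer under a webkit prefix.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.lang = "en-US";
recognition.continuous = true;      // keep listening across pauses
recognition.interimResults = false; // only report finalized phrases

recognition.onresult = (event) => {
  const latest = event.results[event.results.length - 1];
  console.log(latest[0].transcript); // best-guess text for the latest phrase
};

recognition.start(); // prompts the user for microphone permission
```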
Let us know if this helped.
I'm working on an app that allows a user to select music tracks on their iPhone, listen to them, and share them live with another person so that the other person can listen to the same song in sync.
I've managed to get the following prototype working: manually add a file to the bundle I'm working with, then decode it using AudioFileReadPackets and send it over the network using GKSession.
On the receiving end, I use Audio Queue/streaming services to read the stream and play the music (i.e. AudioFileStreamOpen, AudioFileStreamParseBytes, AudioQueueNewOutput, AudioQueueStart, etc.; see here for more details).
That said, I found out that I can't simply read a file from the iPhone's file system and decode it; rather, I have to use AVAssetReader and so on. There are many examples of doing that on Stack Overflow, but they focus on the immediate technical implementation rather than explaining the big picture. I couldn't find a comprehensive guide or much documentation on Apple's developer site (see how they describe the CMSampleBuffer Reference, for example: function parameters have no descriptions, etc.).
Any links/books/etc. that may lead me in the right direction here? Specifically about accessing audio files in the iPod library and segmenting them with AVAssetReader so that they can be sent in a streaming fashion over a network, to be played via Audio Queue/streaming services.
I found this book to be amazingly helpful: Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS.
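For reference, the AVAssetReader side the question asks about can be sketched roughly as below (Swift). It assumes you already have the track's assetURL from an MPMediaItem, decodes to 16-bit PCM, and leaves the actual network send as a stub:

```swift
import AVFoundation

// Reads an iPod-library track as uncompressed PCM chunks suitable for streaming.
func streamAudio(from assetURL: URL, send: (Data) -> Void) throws {
    let asset = AVURLAsset(url: assetURL) // e.g. MPMediaItem's assetURL
    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    let reader = try AVAssetReader(asset: asset)
    // Ask the reader to decode to interleaved 16-bit linear PCM.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false,
    ]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(output)
    guard reader.startReading() else { return }

    // Pull sample buffers one at a time and hand their bytes to the transport.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { continue }
        var totalLength = 0
        var dataPointer: UnsafeMutablePointer<CChar>?
        CMBlockBufferGetDataPointer(blockBuffer,
                                    atOffset: 0,
                                    lengthAtOffsetOut: nil,
                                    totalLengthOut: &totalLength,
                                    dataPointerOut: &dataPointer)
        if let pointer = dataPointer {
            send(Data(bytes: pointer, count: totalLength)) // your GKSession/network layer
        }
    }
}
```

The receiving side can then feed those chunks into the Audio Queue pipeline the question already has working.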