Cross-posted from the Dartisans G+ community, where I got no response so far:
Q: how to do (async) simultaneous stream downloads?
Hi, I'm still learning Dart and, for practice, I'd like to create a web page from which I can fetch data from 1 to 10 URLs that are HTTP streams of binary data. Once I have a chunk of data from each stream (fetched simultaneously), I perform a computation and proceed to the next chunks, and so on, ad lib. I need parallelism because the client has much more network bandwidth than the servers.
I also do not want to fully download each URL: the files are too big to fit in memory or even in local storage. It's pretty similar to video streaming, except the data is arbitrary binary rather than video, and instead of displaying it I just run a computation, on many streams at a time.
Can I do that with Dart, and how? Do dart:io or dart:async have the classes I need? Do I need to use "web workers" to spawn 1 to 10 simultaneous HTTP requests?
Any pointers, advice, or similar samples would be greatly appreciated.
tl;dr: how to process an HTTP stream of data chunk by chunk, and how to parallelize this to process many streams at the same time.
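Not from the original thread, but here is a rough sketch of the chunk-by-chunk idea. It's written in TypeScript against the browser fetch API purely for illustration; Dart's http package exposes a response as a Stream<List<int>>, and the same lock-step pattern applies there. Because the requests themselves are asynchronous, no web workers (or isolates) are needed just to keep 1 to 10 downloads running concurrently; they only become interesting if the per-chunk computation is CPU-heavy. The processChunks helper and the URL list are placeholders.

```ts
// Illustrative sketch: open N HTTP streams, then repeatedly take one chunk
// from each stream, run a computation on the set of chunks, and continue.
// processChunks() stands in for whatever computation you actually need.
function processChunks(chunks: Uint8Array[]): void {
  console.log('got chunks of sizes', chunks.map((c) => c.byteLength));
}

async function consumeStreams(urls: string[]): Promise<void> {
  // Open all requests up front so the downloads proceed in parallel.
  const readers = await Promise.all(
    urls.map(async (url) => {
      const response = await fetch(url);
      if (!response.body) throw new Error(`no response body for ${url}`);
      return response.body.getReader();
    }),
  );

  while (true) {
    // Wait until every stream has delivered its next chunk.
    const results = await Promise.all(readers.map((reader) => reader.read()));
    if (results.some((r) => r.done)) break; // stop when any stream ends
    processChunks(results.map((r) => r.value as Uint8Array));
  }
}
```

One caveat: different servers deliver chunks of different sizes, so in practice you would buffer each stream until a full block is available from every source before running the computation.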
Related
I'm looking for a way to implement real-time streaming of video (and optionally audio) from an iOS device to a browser. In this case the iOS device is the server and the browser is the client.
Video resolution must be in the range 800x600 to 1920x1080. Probably the most important criterion is lag, which should be less than 500 ms.
I've tried a few approaches so far.
1. HLS
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation
Client: JS, VIDEO tag
Works well. Streams smoothly. The VIDEO tag in the browser handles the incoming video stream out of the box. This is great! However, it has lag that is hard to minimize. It feels like this protocol was built for non-interactive video streaming, something like Twitch, where a few seconds of lag is fine.
I tried enabling Low-Latency HLS: a lot of requests, a lot of hassle with the playlist. Let me know if this is the right option and I just have to push harder in this direction.
2. Compress every frame into JPEG and send to a browser via WebSockets
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation, WebSockets server
Client: JS, rendering via IMG tag
Works super-fast and super-smooth. Latency is 20-30 ms! However, when I receive a frame in the browser, I have to load it from a Blob via a base64-encoded URL. At the start all of this works fast and smoothly, but after a while the browser starts to slow down and lag. I'm not sure why; I haven't investigated too deeply yet. Another issue is that frames compressed as JPEGs are much larger (60-120 KB per frame) than the MP4 video stream of HLS. This means more data is pumped through Wi-Fi, and other Wi-Fi consumers start to struggle. This approach works, but it doesn't feel like a perfect solution.
Any ideas or hints (frameworks, protocols, libraries, approaches, e.t.c.) are appreciated!
HLS
… It feels like this protocol was built for non-interactive video streaming …
Yep, that's right. The whole point of HLS was to use generic HTTP servers as media streaming infrastructure rather than proprietary streaming servers. As you've seen, several tradeoffs are made. The biggest problem is that the media is chunked, which naturally causes latency of at least the duration of one chunk; in practice it ends up being a couple of chunks.
"Low latency" HLS is a hack to get back to the methods we had before HLS, with servers that just stream content from the origin, in a way that stays compatible with all the HLS machinery we have to deal with now.
Compress every frame into JPEG and send to a browser via WebSockets
In this case, you've essentially recreated a video codec and added the overhead of WebSockets. Also, by base64-encoding the frames rather than sending them as binary, you're adding extra CPU and memory requirements, as well as ~33% overhead in bandwidth.
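For what it's worth, the gradual slowdown described in the question is often caused by object URLs that are never revoked, which the binary (non-base64) path below avoids. This is only a sketch: the FRAME_WS_URL endpoint is a placeholder, and it assumes the server pushes one JPEG per binary WebSocket message.

```ts
// Sketch: receive raw JPEG frames over a binary WebSocket and render them
// via an <img> tag, revoking old object URLs so memory doesn't grow.
const FRAME_WS_URL = 'wss://example.invalid/frames'; // placeholder endpoint

const img = document.createElement('img');
document.body.appendChild(img);

const ws = new WebSocket(FRAME_WS_URL);
ws.binaryType = 'arraybuffer'; // receive raw bytes instead of base64 text

ws.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  const frame = new Blob([event.data], { type: 'image/jpeg' });
  const url = URL.createObjectURL(frame);
  const previous = img.src;
  img.src = url;
  // Release the previous frame's object URL; forgetting this is a common
  // reason browsers bog down after rendering thousands of frames.
  if (previous.startsWith('blob:')) URL.revokeObjectURL(previous);
};
```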
If you really wanted to go this route, you could simply use MediaRecorder and an HTTP PUT request: stream the output of the recorder to the server, which relays it on to the client over HTTP. The client then just needs a <video> tag referencing some URL on the server, and nothing special for playback. You'll get nice low latency without all the overhead and hassle.
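As a rough sketch of the capture/upload half of that suggestion, assuming the capturing side is a browser context: the /ingest URL and the one-second timeslice are placeholders, and each recorded chunk is sent as its own request here because streaming request bodies aren't available everywhere.

```ts
// Sketch: record the camera with MediaRecorder and ship each chunk to a
// relay server, which appends the chunks and serves the growing resource
// to viewers' <video> tags over plain HTTP.
async function startRecordingUpload(): Promise<void> {
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(media, { mimeType: 'video/webm;codecs=vp8,opus' });

  recorder.ondataavailable = async (event: BlobEvent) => {
    if (event.data.size === 0) return;
    // '/ingest' is a placeholder for your relay endpoint.
    await fetch('/ingest', { method: 'PUT', body: event.data });
  };

  recorder.start(1000); // emit a chunk roughly every second
}
```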
However, don't go that route either. What if the bandwidth drops out? What if some packets are lost and you need to re-sync? How will you set up communication between the two ends to continually adjust quality, buffering, codec negotiation, and so on? What if peer-to-peer connections are advantageous?
Use WebRTC
It's a full purpose-built stack for maintaining low latency. Libraries are available for most any stack on most any platform. It works in browsers.
Rather than reinventing all of this, you can take advantage of what's there.
The downside is complexity... it isn't easy to get started with, but well worth it for most low latency use cases.
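For orientation only, here is roughly what the sending side looks like in a browser; the sendToSignalingServer helper is a stand-in for whatever signaling channel (WebSocket, HTTP, etc.) you provide yourself, and the STUN server is just an example.

```ts
// Condensed sketch of the WebRTC sending side. Signaling (exchanging the SDP
// offer/answer and ICE candidates) is up to you and is only stubbed here.
function sendToSignalingServer(message: unknown): void {
  // e.g. signalingSocket.send(JSON.stringify(message));
}

async function startWebRtcSender(stream: MediaStream): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // example STUN server
  });

  // Hand the captured tracks to WebRTC; it takes care of codecs, congestion
  // control and recovery while keeping latency low.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  pc.onicecandidate = (event) => {
    if (event.candidate) sendToSignalingServer({ candidate: event.candidate });
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignalingServer({ sdp: pc.localDescription });

  // The remote answer and candidates are applied later via
  // pc.setRemoteDescription() and pc.addIceCandidate() (not shown).
  return pc;
}
```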
The Google Cloud Speech API's streaming recognition requires a streaming session to be shorter than 60 seconds. To handle streaming beyond this limitation, we have to split the streamed data into several chunks, using something like single_utterance, for example. When submitting such a chunk, we call RequestObserver.onCompleted() to mark the end of the streaming session. However, the gRPC threads that were created to handle streaming seem to keep running even after the final results have arrived, causing the error "io.grpc.StatusRuntimeException: OUT_OF_RANGE: Exceeded maximum allowed stream duration of 65 seconds".
Is there any other mechanism that could be used to properly terminate the gRPC threads, so that they don't keep running up to the allowed 60-second limit? (It seems we could use SpeechClient.close() or SpeechClient.shutdown() to release all background resources, but that would require recreating the SpeechClient instances, which is somewhat heavy.)
Are there any other recommended ways to stream data beyond the 60-second limit without splitting the stream into several chunks?
Parameters: [Encoding=LINEAR16, Rate=44100]
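Not from the original thread, but the usual workaround is the same in every language: keep one SpeechClient alive and end/reopen only the streaming call shortly before the limit. Below is a sketch with the Node.js client (@google-cloud/speech); the 55-second restart interval is my own choice rather than an official value, and writing the audio is omitted.

```ts
// Sketch: restart the streaming recognize call before the ~60 s server limit
// while reusing the same SpeechClient (no close()/shutdown() needed).
import { SpeechClient } from '@google-cloud/speech';

const client = new SpeechClient();
const RESTART_AFTER_MS = 55_000; // restart comfortably before the limit

function startStream(): void {
  const recognizeStream = client
    .streamingRecognize({
      config: { encoding: 'LINEAR16', sampleRateHertz: 44100, languageCode: 'en-US' },
      interimResults: false,
    })
    .on('error', console.error)
    .on('data', (response) => console.log(JSON.stringify(response.results)));

  // Audio chunks from your capture pipeline are written with
  // recognizeStream.write(audioBuffer) (not shown here).

  setTimeout(() => {
    recognizeStream.end(); // cleanly finish this session...
    startStream();         // ...and immediately open a new one
  }, RESTART_AFTER_MS);
}

startStream();
```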
Can anybody explain the basic concept of mixing in Kurento Media Server?
Mixing is mentioned among the features Kurento provides, so I would like to know what Kurento Media Server actually mixes. Specifically:
Does it mix the multiple streams generated by a user into one stream and broadcast that stream to the other receiving users? If it does, how do I use this?
Is Kurento able to receive multiple streams through one PeerConnection object per user, i.e., can one WebRtcEndpoint in Kurento receive or send multiple streams by mixing those streams into one stream?
Edit regarding the answer update:
So, I can use the mixing concept by using a HubPort.
Now, does this HubPort support different MediaTypes? For example, if one user is streaming a screen share and at the same time also streaming audio, does the Composite element mix both streams into one and send that single stream to all other users?
The concept of mixing refers to combining several media streams into one. This is easier to understand with a conference room. In other setups, every user has one stream going out and another coming in for each other participant (except himself). That leaves you with 1 + (n - 1) = n streams per participant, or n * n streams total, where n is the number of participants.
Mixing all streams in the media server lets you save bandwidth, which is ideal in scenarios like mobile devices connected through 3G, for instance. What the mixer does is combine all the streams into one, so each user sends one stream and receives one stream that contains all the other participants' combined media (everything except his own). Just two streams per user saves a lot of bandwidth.
This, however, takes a toll on CPU consumption, as it's necessary to adapt the videos to the new resolution, compose them, and so on: there is some processing involved.
On the other hand, the concept you are referring to is multicast, which is the ability to send several streams through one WebRTC connection. This doesn't save bandwidth, nor does it combine all the streams into one, but it helps you reduce the number of endpoints present in your deployment. This is on our roadmap, but I can't tell you when it will arrive.
EDIT
Mixing can be achieved in the media server through the Composite media element. You can check this other SO answer for more info on how to use that media element.
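As a rough outline of what the Composite/HubPort wiring looks like with the kurento-client Node.js library (the KMS WebSocket URI is a placeholder, and the exact factory/connect method names may differ between client versions, so verify them against your client's documentation):

```ts
// Outline: one Composite hub mixes all connected media into a single stream;
// each participant gets a WebRtcEndpoint plugged into the hub via a HubPort.
import kurento from 'kurento-client'; // assumes esModuleInterop

const KMS_URI = 'ws://kms.example:8888/kurento'; // placeholder

async function buildMixer(sdpOffers: string[]): Promise<string[]> {
  const client = await kurento(KMS_URI);
  const pipeline = await client.create('MediaPipeline');
  const composite = await pipeline.create('Composite');

  const answers: string[] = [];
  for (const offer of sdpOffers) {
    const webRtcEp = await pipeline.create('WebRtcEndpoint');
    const hubPort = await composite.createHubPort(); // check this name against your client version

    await webRtcEp.connect(hubPort); // this participant's media into the mixer
    await hubPort.connect(webRtcEp); // the mixed stream back out to them

    answers.push(await webRtcEp.processOffer(offer));
    await webRtcEp.gatherCandidates();
  }
  return answers;
}
```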
Every tutorial about Apache Flume gives "logs getting continuously generated" as the example.
I am curious whether Flume works only on text data, or whether it can also work with streaming data like audio, video, or electronic sensor inputs.
After all, irrespective of the data type, it is all just byte arrays.
Flume is designed for streaming text data. It is possible to provide a schema definition for text data so that the consumer can process it after receiving it, and the consumer can scale horizontally with the growing volume of data while still using commodity hardware (moderate cores/RAM). For binary data, however, reconstruction and parsing would be a heavily resource-intensive operation.
I have a question. My app is a short-video sharing application just like Vine, but I'm running into problems when it is used in the subway or other places with a weak signal: uploads sometimes fail, which makes for a poor user experience.
I am a newbie at network programming and iOS. I did a lot of searching on Google and have some general sense of the topic; let me sum up my findings, and please give me some suggestions.
My requirements are: 1. support resuming when an upload is interrupted; 2. upload successfully on a weak signal. I do NOT need to worry about real-time issues or how to compress the video; treating the video as a plain file is totally fine. BTW, the server is REST-style and I use POST to upload data.
Questions:
Which is the better way to meet my requirements: using a stream (by stream I do NOT mean live video streaming, just a data stream like NSOutputStream/NSInputStream; the video is played only after all of it has been uploaded, NOT played while still downloading), or dividing the whole file into several chunks and uploading it chunk by chunk?
Someone said that using a stream is good for resource efficiency, since the stream reads the file into memory and controls the buffer size, and after setting up the connection with the server we can use a delegate to handle failures, so it's easy to use.
Uploading chunk by chunk is supposedly better for speed, but that statement puzzles me: when uploading by chunks, after successfully uploading one chunk we need to release the connection resources, set up another connection, and then upload again, and I think all this preparation costs time.
If uploading by chunks, what chunk size is good? One video file is almost 1 MB; someone said 8 KB is a safe choice, but...
Since the app needs to adapt to different signal strengths, is there a way to do this? For example, making the chunk size depend on the available bandwidth, or some other approach.
Is there any private API that already supports resuming an interrupted upload, or any Apple API that can support this? My app needs to run on iOS 5 and above, so I can NOT use NSURLSession.
Is concurrent uploading a way to speed things up? If so, how do I implement it, and is there any API available?
Thank you very much in advance for helping a newbie like me.
Your question touches a lot of topics. iOS doesn't have a public API to stream video (such as the FaceTime components). The main issue is that sending frame by frame requires a lot of network traffic; if you instead use the normal video writer you get hardware compression, which is a lot better. There's more you can check here: Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone), Upload live streaming video from iPhone like Ustream or Qik, How send to stream video from iOS device to server? and here
If real time is not your concern, I would suggest you just use a good network manager such as MKNetworkKit or AFNetworking 2.0. They will take care of most of the aspects you asked about.
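Not part of the original answer, but since the question is mostly about the chunk/resume flow, here is a conceptual sketch of it, written in TypeScript for brevity; on iOS 5+ the same loop maps onto NSURLConnection or AFNetworking. The /upload endpoint, the offset parameter, and the chunk-size limits are assumptions about your REST server, not a known API.

```ts
// Conceptual sketch of chunked upload with resume and adaptive chunk size.
const MIN_CHUNK = 64 * 1024;  // shrink toward this on failures (weak signal)
const MAX_CHUNK = 512 * 1024; // grow toward this on successes

async function uploadWithResume(file: Blob, uploadId: string): Promise<void> {
  let offset = 0;             // persist this value to resume after an interruption
  let chunkSize = MIN_CHUNK;

  while (offset < file.size) {
    const chunk = file.slice(offset, offset + chunkSize);
    try {
      const res = await fetch(`/upload/${uploadId}?offset=${offset}`, {
        method: 'POST',
        body: chunk,
      });
      if (!res.ok) throw new Error(`server returned ${res.status}`);

      offset += chunk.size;                           // server acknowledged this chunk
      chunkSize = Math.min(chunkSize * 2, MAX_CHUNK); // good network: send bigger chunks
    } catch {
      chunkSize = Math.max(Math.floor(chunkSize / 2), MIN_CHUNK); // weak signal: back off
      // retry the same offset; nothing already acknowledged is re-sent
    }
  }
}
```

The key points are that the client only advances the offset after the server acknowledges a chunk (which gives you resume for free, as long as the offset is persisted), and that the chunk size adapts to the observed network quality rather than being fixed at something like 8 KB.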