Jitsi multiple streams handling

I have a Jitsi instance that I would like to share with 50 people for an event. I will mute everyone to make sure that only one or two people are talking at the same time, but I wonder if I need to ask people to cut the video too (which would be sad, because seeing other people's faces would be fun).
I know that Jitsi works peer-to-peer when only two people are connected, but what about a larger room?
Does the server centralise all the video streams?

With more than two participants, Jitsi Videobridge takes over (no more peer-to-peer, and no H264) and routes the video streams from the clients to the server and back out to the clients.
As far as I understand, it works very much like a TURN server.
Whether it copes strongly depends on your hardware setup. If Jitsi is running on a dedicated machine, it should be able to handle this.
On a virtual server, on the other hand...
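To see why hardware matters, a back-of-the-envelope calculation helps: with a bridge, each client uploads one stream and downloads one per other participant, while the server forwards all of them. A minimal sketch in TypeScript (the 1.5 Mbps per-stream bitrate is an assumption, and this ignores Jitsi's simulcast/last-N optimisations, which cut the real load considerably):

// Naive SFU bandwidth estimate; bitratePerStreamMbps is an assumed figure.
function sfuBandwidth(participants: number, bitratePerStreamMbps = 1.5) {
  const clientUpMbps = bitratePerStreamMbps;                        // each client sends one stream
  const clientDownMbps = (participants - 1) * bitratePerStreamMbps; // and receives everyone else's
  const serverMbps = participants * (participants - 1) * bitratePerStreamMbps; // bridge forwards all
  return { clientUpMbps, clientDownMbps, serverMbps };
}

console.log(sfuBandwidth(50));
// => { clientUpMbps: 1.5, clientDownMbps: 73.5, serverMbps: 3675 }

The naive 50-participant numbers (over 70 Mbps down per client, several Gbps through the bridge) are exactly why cutting video, or at least relying on Jitsi suspending streams that are not being displayed, is worth considering.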

Related

Protocol to find local server out of network

I was thinking about a project the other day and an idea crossed my mind:
How would you set up a server, for instance at a restaurant, and have the restaurant chain's app find and connect directly to the locally running server without being on the same network? This should work at longer ranges, so simple Bluetooth does not fit here.
I've done some research, but I'm not very... good with protocols, and nothing I've found seems to really fit that goal. So I'm wondering what the best way to code this connection would be. Any ideas would be helpful :)
Here are the requirements:
A standard phone should be able to perform the operation.
It should have a fairly high range, think a big store or restaurant.
Server and device do not have to be in the same network.
The connection gets established by the phone on app start, to a server that can do whatever it needs to get this done.

Video transmission over wifi using UDP/packet injection

Hey Stackoverflow community :)
I'm looking into making a camera stream video from an RC device to a computer over wifi.
After considering all of the options, I'm left with two:
use UDP to transfer video in packets
use packet injection and packet sniffing on the receiving device.
I was wondering what the pros and cons of each method are (for this specific purpose of video transmission)?
After looking around I found many implementations of both, but nowhere is it specified why one is better than the other.
A few things I have not mentioned yet:
I know UDP does not have error correction, which can make the video glitchy; I don't care about the quality of the video as long as it is recognizable.
I don't want to use a connection-based protocol (TCP, etc.); I don't want to wait for a handshake when I get disconnected.
thanks :)
I'm trying to do a similar thing. My take on this: when you use the wifi cards in monitor mode (i.e. using packet sniffing/injection), you don't actually need to be connected to a network at all. Normally, you have to be associated with an access point as a client, and then you communicate over UDP through that connection. In the injection case, however, the UDP messages are routed to the wifi card and the packets are injected without being associated with any access point. Any 'client' then just has to sniff, i.e. listen on that same channel, to receive the transmission. So the benefit is not only that UDP does not retransmit lost frames, but also that in this case you don't need to join the network at all to get the packets.
In my case this is preferable, since in the former case you have to connect to the AP, which typically requires more capable hardware on the receiver side (more range is needed for the association, since you essentially need to send messages back to get connected).
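For the plain-UDP option from the question, the sender side can be as small as chunking each encoded frame into datagrams. A minimal sketch in TypeScript/Node; the address, port and chunk size are arbitrary assumptions:

import dgram from "node:dgram";

// Minimal UDP video sender sketch: splits an encoded frame into
// datagram-sized chunks and fires them off with no retransmission.
// RECEIVER_IP, PORT and CHUNK_SIZE are placeholder assumptions.
const PORT = 5600;
const RECEIVER_IP = "192.168.1.10";
const CHUNK_SIZE = 1400; // stay under a typical MTU

const socket = dgram.createSocket("udp4");

function sendFrame(frame: Buffer): void {
  for (let offset = 0; offset < frame.length; offset += CHUNK_SIZE) {
    const chunk = frame.subarray(offset, offset + CHUNK_SIZE);
    socket.send(chunk, PORT, RECEIVER_IP); // lost chunks are simply gone
  }
}

A receiver just binds a socket on the same port and reassembles whatever arrives; lost chunks show up as exactly the glitches the question already accepts.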
FYI here are the links/repos I am using and it also is a reference to what I am talking about
https://docs.px4.io/master/en/tutorials/video_streaming_wifi_broadcast.html
https://github.com/svpcom/wifibroadcast
In the short term I am using an off-the-shelf 'solution', the Accsoon CineEye Air, which basically transmits HDMI 300 ft line-of-sight over WiFi. You need an Android phone to receive it, and I'm using the Vysor application (the paid version is $40) to mirror the screen to my desktop. It works, but the latency is still more than I want: at least 60 ms from the CineEye, so you can drive it around, but it's not as quick as DJI (around 30-40 ms), which is my goal.

Building iOS Native App using WebRTC

I've been searching for 4 days, but can't figure it out. I built all the libraries and integrated them into my custom project, but I don't know what steps to take to make it work. The only thing I found with a code example/explanation is tech.appear.in/2015/05/25/Getting-started-with-WebRTC-on-iOS, but it is poor and unclear to me, and so is the AppRTCDemo source code. I've read about WebRTC for browsers but still can't reproduce it on iOS.
Can anybody explain, or provide links to an explanation of, how to build a complete iOS native app using the WebRTC API, for example a p2p iOS chat?
Besides the fact that I do not understand the code logic provided in the demo, I can't understand:
1) What are ICE servers for my iOS app? Should I take care of them? Are they something server-side? Should I code and run one myself, or can I use an existing Parse backend?
2) What is the signaling mechanism in an iOS app? Is it client-side only, or must it be implemented on the server side too?
3) And maybe someone can give a step-by-step guide, maybe with some code, on how to implement a simple iOS p2p chat using WebRTC? For example:
"You have to:
Create ICE/STUN/TURN server on parse core using this =source= and this tutorial =tutorial=.
Create RTCPeerConnection using created ICEServer:
RTCICEServer *iceServer = [[RTCICEServer alloc] initWithURI:[NSURL URLWithString:kICEServerURL] username:@"" password:@""];
RTCPeerConnectionFactory *pcFactory = [[RTCPeerConnectionFactory alloc] init];
RTCPeerConnection *peerConnection = [pcFactory peerConnectionWithICEServers:@[ iceServer ] constraints:nil delegate:self];
Create DataChannel using ...
Send signal using ... explained here =link=
Set local and remote descriptions ...
Send Data ... using ...
... " or something similar.
I'm sorry for asking this, but I'm losing my mind trying to figure it out. Thank you!
I am not an expert in WebRTC, but I will try to answer some of your questions.
1. ICE servers -- NATs and firewalls pose a significant problem when setting up IP endpoints, so the IETF standards STUN, TURN and ICE were developed to address the NAT traversal problem.
STUN helps IP endpoints discover whether they are behind a NAT/firewall and, if so, determine the public IP address and the type of the firewall. STUN then uses this information to assist in establishing peer-to-peer IP connectivity.
TURN, which stands for Traversal Using Relays around NAT, provides a fallback NAT traversal technique, using a media relay server to facilitate media transport between endpoints.
ICE is a framework that leverages both STUN and TURN to provide reliable IP setup and media transport, using an offer/answer model (as in SIP) for endpoints to exchange multiple candidate IP addresses and ports (such as private addresses and TURN server addresses).
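In practice, 'taking care of' ICE servers (question 1) mostly means supplying their addresses when creating the peer connection; the servers themselves run elsewhere (there are public STUN servers, while TURN you usually host yourself, e.g. with coturn). A configuration sketch, written against the browser-style API for brevity; the TURN URL and credentials are placeholders:

// ICE server configuration sketch; TURN URL and credentials are placeholders.
const config: RTCConfiguration = {
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },                                     // public STUN
    { urls: "turn:turn.example.com:3478", username: "user", credential: "pass" }, // your own TURN
  ],
};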
2. Signaling is the process of coordinating communication. The signaling part needs to be implemented by you, according to your needs (for example, if you have a SIP infrastructure in place, then you will have to implement SIP signaling). For a WebRTC application to set up a 'call', its clients need to exchange information:
Session control messages used to open or close communication.
Error messages.
Media metadata such as codecs and codec settings, bandwidth and media types.
Key data, used to establish secure connections.
Network data, such as a host's IP address and port as seen by the outside world.
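Because WebRTC deliberately leaves signaling to the application, any transport will do. A minimal WebSocket relay sketch in TypeScript using the ws package (the port and the blind two-peer relay are assumptions for illustration):

import { WebSocketServer, WebSocket } from "ws";

// Minimal signaling relay sketch: every message (offer, answer, ICE
// candidate) is forwarded verbatim to the other connected peer.
const wss = new WebSocketServer({ port: 8080 });
const peers: WebSocket[] = [];

wss.on("connection", (ws) => {
  peers.push(ws);
  ws.on("message", (data) => {
    for (const peer of peers) {
      if (peer !== ws && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString()); // relay SDP and candidates unchanged
      }
    }
  });
  ws.on("close", () => peers.splice(peers.indexOf(ws), 1));
});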
Steps (a code sketch follows the two lists below):
For the offerer:
First create the peer connection and pass the ICE server configuration into it as a parameter.
Set event handlers for three events:
onicecandidate -- returns locally generated ICE candidates so you can pass them to the other peer(s), i.e. the list of ICE candidates returned by the STUN/TURN servers; these candidates contain your public IPv4/IPv6 addresses as well as random UDP addresses.
onaddstream -- returns the remote stream (the microphone and camera of your friend!).
addStream attaches your local microphone and camera stream for the other peer.
Now create the SDP offer by calling createOffer, set it with setLocalDescription, and set the remote SDP by calling setRemoteDescription.
For the answerer:
setRemoteDescription
createAnswer
setLocalDescription
onicecandidate -- fires with locally generated ICE candidates
addIceCandidate -- called with the ICE candidates sent by the other peer
onaddstream -- fires when the remote stream is added
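A sketch of that offer/answer flow, written against the browser API for brevity (the steps map one-to-one onto the iOS classes). Note it uses the modern addTrack/ontrack names instead of the legacy addStream/onaddstream above, and sendToPeer stands in for whatever signaling channel you choose -- it is an assumption, not a WebRTC API:

// Offer/answer flow sketch using the browser WebRTC API.
declare function sendToPeer(msg: object): void; // your signaling channel

const config = { iceServers: [{ urls: "stun:stun.l.google.com:19302" }] };

async function startOffer(localStream: MediaStream) {
  const pc = new RTCPeerConnection(config);
  localStream.getTracks().forEach((t) => pc.addTrack(t, localStream));
  pc.onicecandidate = (e) => e.candidate && sendToPeer({ candidate: e.candidate });
  pc.ontrack = (e) => { /* attach e.streams[0] to a video element */ };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ sdp: pc.localDescription });
  return pc;
}

async function handleOffer(pc: RTCPeerConnection, offer: RTCSessionDescriptionInit) {
  await pc.setRemoteDescription(offer);   // answerer side
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  sendToPeer({ sdp: pc.localDescription });
}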
I hope this clears up some of your doubts.
I went through the process of implementing it a few months ago. What I found was that the library was not stable: sometimes it worked, sometimes it didn't.
Additionally, my iPhone always got hot when I was using it.
I would not suggest using this library, or WebRTC technology overall, for commercial projects.
This is my implementation, which was working a few months ago:
https://github.com/aolszak/WebRTC-iOS
Good luck!

Checking for WebRTC connectivity - reliable methods

I have a live video chat application and I use a TURN server which supports STUN/TURN and both UDP/TCP transport.
Sometimes users are connected to networks which block so many ports and protocols that a WebRTC connection just cannot happen (usually these are corporate networks). I would like to check whether a WebRTC connection is possible before the users try to connect to each other (i.e., perform a technical pre-check).
How can I do it? Ideas I have in my head:
Try to download a hosted chunk of data (an audio file, for example) via WebRTC -- is this possible, and would it be enough to make sure both inbound and outbound connections are open?
Use a TURN server as a host to make a connection to and see if it fails (have no idea if I can do it or not)
Use Flash to try to download/upload a chunk of data over specific ports and protocols, maybe even using Cirrus. However, I am not sure this test would be accurate from a WebRTC perspective.
Any other ideas?
Additional requirement: the checking technique must support Chrome, Opera and Firefox. Preferably also IE/Safari via Temasys plugin.
Edit 1: gathering ICE candidates is a good idea; however, it is not 100% reliable. Once I checked the logs in my application and it had actually gathered relay ICE candidates, but the video/audio transmission still failed. I tested on AppRTC as well and got the same results.
The best way to check is to connect with just a data channel first. Your users won't notice. If that works, then audio and video are almost guaranteed to work. As a bonus, you can use the data channel for signaling, for super-fast connecting when your users are ready.
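A related technique, sketched here under assumptions, is a local loopback probe: wire two peer connections in the same page to each other, forced through your TURN server, and see whether a data channel opens. The TURN URL and credentials are placeholders; forcing the "relay" policy makes the probe succeed only if the TURN path actually works:

// Data-channel connectivity probe sketch (loopback through TURN).
const TURN_CONFIG: RTCConfiguration = {
  iceServers: [{ urls: "turn:turn.example.com:3478", username: "u", credential: "p" }],
  iceTransportPolicy: "relay", // only succeed via the TURN relay
};

async function probeConnectivity(timeoutMs = 5000): Promise<boolean> {
  const a = new RTCPeerConnection(TURN_CONFIG);
  const b = new RTCPeerConnection(TURN_CONFIG);
  a.onicecandidate = (e) => e.candidate && b.addIceCandidate(e.candidate);
  b.onicecandidate = (e) => e.candidate && a.addIceCandidate(e.candidate);

  const channel = a.createDataChannel("probe");
  const opened = new Promise<boolean>((resolve) => {
    channel.onopen = () => resolve(true);
    setTimeout(() => resolve(false), timeoutMs); // give up after the timeout
  });

  const offer = await a.createOffer();
  await a.setLocalDescription(offer);
  await b.setRemoteDescription(offer);
  const answer = await b.createAnswer();
  await b.setLocalDescription(answer);
  await a.setRemoteDescription(answer);

  const ok = await opened;
  a.close();
  b.close();
  return ok;
}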
The typical WebRTC approach to this is to create a peer connection with STUN and TURN servers, call createOffer and setLocalDescription, and watch the candidates that get gathered. See e.g. http://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
If you get srflx candidates, your STUN server works (i.e. UDP is not blocked). More interesting is whether you get relay candidates. If you do, using TURN as a fallback will work, though quality might suffer if TURN/TCP is used. If you don't get relay candidates, calls are very unlikely to work.
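A minimal sketch of that gathering check (server URLs and credentials are placeholders):

// ICE-gathering check sketch: detect srflx (STUN) and relay (TURN) candidates.
async function checkIceServers(): Promise<{ srflx: boolean; relay: boolean }> {
  const pc = new RTCPeerConnection({
    iceServers: [
      { urls: "stun:stun.example.com:3478" },
      { urls: "turn:turn.example.com:3478", username: "u", credential: "p" },
    ],
  });
  pc.createDataChannel("check"); // ensure there is something to negotiate

  const result = { srflx: false, relay: false };
  const done = new Promise<void>((resolve) => {
    pc.onicecandidate = (e) => {
      if (!e.candidate) return resolve(); // null candidate = gathering finished
      if (e.candidate.candidate.includes(" typ srflx")) result.srflx = true;
      if (e.candidate.candidate.includes(" typ relay")) result.relay = true;
    };
  });

  await pc.setLocalDescription(await pc.createOffer());
  await done;
  pc.close();
  return result;
}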

Deliver multicast to several different geo-locations

I need to use one logical PGM-based multicast address in the application while enabling the application to run "seamlessly" across several different geo-locations (think US/Europe/Australia).
The application is demanding in both throughput (several million business messages a day) and latency, with a lot of small but very frequently sent messages. Classical AtomPub will not work here due to some external limits on latency.
I have come up with several options to connect those datacenters but can’t find the best one.
Options which I have considered are:
1) Forward the multicast messages via VPNs (can a VPN handle such a big load?).
2) Translate all multicast messages into "wrapper messages" and forward them via AMQP.
3) Write a specialized in-house gateway that tunnels the multicast messages via TCP to the other two locations.
4) Any other solution.
I would prefer option 1, as it does not require additional code from the devs, but I'm afraid it will not be a reliable connection.
Are there any rules of thumb for such connectivity?
What is the best network configuration, with regard to the geographical layout, for the above constraints?
Just wanted to say hello :)
As for the topic: we don't have much experience with multicasting over WAN; however, my feeling is that PGM + WAN + a high volume of data would lead to retransmission storms. A VPN won't make this problem disappear, as all the Australian receivers would, when confronted with missing packets, send NAKs to Europe, etc.
The PGM specification does allow for a tree structure of nodes for message delivery, so in theory you could place a single node on the receiving side that would in turn re-multicast the data locally. However, I am not sure whether this kind of functionality is available in the MS implementation of PGM. Optionally, you could place a Cisco router with PGM support on the receiving side to handle this for you.
In any case, my preference would be to convert the data to a TCP stream, pass it over the WAN, and then convert it back to PGM on the other side. Some code has to be written, but no nasty surprises are to be expected.
Martin S.
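The sending side of such a gateway can be sketched with plain UDP multicast standing in for PGM (Node has no PGM bindings; the group, ports and remote host are placeholder assumptions):

import dgram from "node:dgram";
import net from "node:net";

// Multicast-to-TCP gateway sketch: receive on a local multicast group,
// length-prefix each message, and stream it to the remote site over TCP.
const GROUP = "239.1.1.1";
const MCAST_PORT = 7500;
const REMOTE = { host: "gateway.eu.example.com", port: 7600 };

const tcp = net.connect(REMOTE);
const sock = dgram.createSocket({ type: "udp4", reuseAddr: true });

sock.on("listening", () => sock.addMembership(GROUP));
sock.on("message", (msg) => {
  const len = Buffer.alloc(4);
  len.writeUInt32BE(msg.length); // length-prefix so frames survive TCP's byte stream
  tcp.write(Buffer.concat([len, msg]));
});
sock.bind(MCAST_PORT);

The remote side reads the length-prefixed frames off the TCP stream and re-multicasts each payload into its own local group.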
At CohesiveFT we ran into a very similar problem when we designed our "VPN-Cubed" product for connecting multiple clouds to servers behind our own firewall, in one VPN. We wanted to be able to run apps that talked to each other using multicast, but Amazon EC2, for example, does not support multicast, for reasons that should be fairly obvious if you consider the potential for network storms across a whole data center. We also wanted to route traffic across a wide-area federation of nodes over the internet.
Without going into too much detail, the solution involved combining tunneling with standard routing protocols like BGP, and open VPN technologies. We used RabbitMQ AMQP to deliver messages in a pub/sub style without needing physical multicast. This means you can fake multicast over wide-area subnets, even across domains and firewalls, provided you are inside the VPN-Cubed safe harbour. It works because it is a "network overlay", as described in the technical note here: http://blog.elasticserver.com/2008/12/vpn-cubed-technical-overview.html
I don't intend to offer you a specific solution here, but I do hope this answer gives you the confidence to try some of these approaches.
Cheers, alexis
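To make the AMQP pub/sub idea concrete: a fanout exchange gives every bound queue its own copy of each message, which is how AMQP can stand in for multicast. A minimal sketch using the amqplib Node client (the exchange name and broker URL are assumptions):

import amqp from "amqplib";

// Fanout-exchange sketch: every bound queue receives a copy of each message.
async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("fake-multicast", "fanout", { durable: false });

  // Each "multicast receiver" binds its own exclusive queue to the exchange.
  const { queue } = await ch.assertQueue("", { exclusive: true });
  await ch.bindQueue(queue, "fake-multicast", "");
  await ch.consume(queue, (msg) => msg && console.log(msg.content.toString()), { noAck: true });

  // A "multicast sender" publishes to the exchange, not to any queue.
  ch.publish("fake-multicast", "", Buffer.from("hello all subscribers"));
}

main().catch(console.error);

Each site runs its own consumer; the exchange takes care of the copies.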
