How to use boost websocket to implement sub-protocol - boost-beast

I want to implement a websocket sub-protocol.
For example, I have a websocket server at ws://localhost:1234; now I need one more sub-protocol, served at ws://localhost:1234/sub.
I know libwebsockets provides this capability, but I haven't found it in Boost's websocket implementation.
Is there any way to achieve this?
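In Beast you can do this routing yourself: read the HTTP upgrade request with http::read, inspect req.target() (a connection to ws://localhost:1234/sub arrives with target "/sub"), and then hand the request to ws.accept(req); for a true sub-protocol you would instead inspect the Sec-WebSocket-Protocol header and echo the chosen value back through a response decorator. Below is a minimal, Beast-free sketch of just the routing piece; the Router type and handler names are hypothetical, and in a real server the dispatch key would come from req.target():

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical routing table keyed by the HTTP request target; in a
// Beast server the key would come from req.target() after http::read.
using Handler = std::function<void(const std::string&)>;

struct Router {
    std::map<std::string, Handler> routes;

    // Dispatch an upgrade request by its target path. Returns false if
    // no endpoint matches (a real server would answer 404 and close).
    bool dispatch(const std::string& target) const {
        auto it = routes.find(target);
        if (it == routes.end())
            return false;
        it->second(target);  // here a Beast server would call ws.accept(req)
        return true;
    }
};
```

Each registered handler would own one accepted websocket session; registering "/" and "/sub" gives you the two endpoints from the question.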

Related

VoIP integration / architecture for a WEB app (Rails)

I want to understand how I can integrate VoIP into my web app, which in my case is a Rails app.
What I want to achieve is sending socket events to the front end for each call state:
call ringing
call started
call ended
The implementation is already done, but I'm not convinced it is the right architecture, and the information I have found so far online is sparse.
I don't think it makes sense to explain how it is currently done (though I can provide details if needed). Starting from the ruby-asterisk gem, which can be used to retrieve data about an extension number, what would be the correct architecture for continuously retrieving call-state events and sending them as socket events to the web?
How can you determine whether a call has ended?
On the overall implementation, do you see any use for Redis, to save previous states of a call and then determine the new states?
Main issue is: Asterisk is a PBX.
Again: it is a small-office PBX, not an all-in-one platform with an API.
So the correct architecture for high load is a centralized, high-performance socket server that supports auth, responds to your API calls (if any), handles event notification, etc. After that, you use AMI plus the dialplan to notify your server about actions on the PBX.
Your web app should connect to that server, not directly to Asterisk. Only ONE connection to Asterisk is recommended, for performance reasons.
If you have low load, it doesn't matter what you do; it will likely work fine.
Asterisk does not support Redis, so that is unlikely to help. Use CDRs for the end event.
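AMI itself is a line-oriented TCP protocol (port 5038 by default): events arrive as blocks of "Key: Value" lines terminated by a blank line, which the socket server parses and forwards to web clients. A small sketch of that parsing step (the sample field names below, Event/Channel/Cause, are standard AMI Hangup fields, but the function is illustrative, not a full AMI client):

```cpp
#include <map>
#include <sstream>
#include <string>

// Parse one AMI frame ("Key: Value" lines terminated by a blank line)
// into a key/value map. A real server would read frames off the AMI
// TCP socket and forward e.g. Hangup events to its web clients.
std::map<std::string, std::string> parse_ami_frame(const std::string& frame) {
    std::map<std::string, std::string> fields;
    std::istringstream in(frame);
    std::string line;
    while (std::getline(in, line)) {
        if (!line.empty() && line.back() == '\r')
            line.pop_back();                  // AMI lines end with CRLF
        if (line.empty())
            break;                            // blank line ends the frame
        auto colon = line.find(": ");
        if (colon != std::string::npos)
            fields[line.substr(0, colon)] = line.substr(colon + 2);
    }
    return fields;
}
```

A Hangup event parsed this way gives you the end-of-call signal directly, without polling CDRs.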

Node.js server sending info to ios app

I am new to backend programming but would like to try to put together the backend of an app I am building. I essentially want to implement an observer-type programming pattern between the server and the iOS app. For instance, two different app users may subscribe to different things on the Node.js server and would get different JSON sent to the Swift side of the app. I am unsure, however, how to subscribe a user to the Node.js server from Swift, and then how to set a listener for the JSON responses as they come in. I would appreciate any help on this sort of server programming pattern if anyone has references or thoughts.
Well, Node.js has the EventEmitter, which can serve as an implementation of the observer pattern.
In your case you can use a specialized EventEmitter-based class, the Net module, which is an event emitter for handling TCP connections. Since TCP connections are bidirectional by default, you can handle events on both sides with it.
So you could handle the calls from the Swift app (or other languages). It's just a matter of opening a socket with the server and sending/receiving data.
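To make the observer idea concrete, here is a minimal emitter sketch mirroring the on()/emit() shape of Node's EventEmitter (written in C++ for illustration; the class and event names are hypothetical, not part of any library):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal observer-pattern sketch mirroring Node's EventEmitter API:
// listeners subscribe by event name, emit() notifies all of them.
class EventEmitter {
public:
    using Listener = std::function<void(const std::string&)>;

    // Subscribe a listener to an event name (Node: emitter.on(name, fn)).
    void on(const std::string& event, Listener fn) {
        listeners_[event].push_back(std::move(fn));
    }

    // Notify every listener registered for this event with a payload.
    void emit(const std::string& event, const std::string& payload) {
        for (auto& fn : listeners_[event])
            fn(payload);
    }

private:
    std::map<std::string, std::vector<Listener>> listeners_;
};
```

In the app described above, each connected client would register listeners for the topics it subscribed to, and the server would emit per-topic events whose payloads are serialized to JSON before being written to the socket.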

How to create an ONVIF compliant interface using PiCam without third party codes?

I was looking through a lot of questions and answers about ONVIF compliant solutions for the Raspberry Pi, but most of them provide incomplete solutions to my problem. I am looking for a way to turn the Raspberry Pi Camera (not a USB camera) into an ONVIF compliant camera which streams out over RTSP when needed.
Here is a similar question on how to implement an ONVIF compliant interface that I've already tried to answer, but since in this case I guess you just want the video stream, and using gSOAP or anything similar would violate the third-party-software limitation, here is what I would do:
1) For an ONVIF client to be able to retrieve your stream URL, you need to implement a response to the GetStreamUri ONVIF request. Through that you can return the actual RTSP link, and if you have some sort of RTSP server running (I guess live555 is the most common choice these days), the client will then connect to the RTSP server and get the stream. I assume there are out-of-the-box RTSP streaming solutions for the Raspberry Pi, so I won't talk about how to set up live555.
2) For GetStreamUri to work, the client needs to know a profile token for the specific stream, which is why you also need to implement the GetProfiles request. Since you are writing a solution for one exact use case, just predefine the XML response to this request with a single profile and a hardcoded token. The GetProfiles request itself does not depend on any input parameters, so you should be safe here. The ONVIF client may also send authorization headers, but for a start you can ignore them and implement authorization once the actual streaming is running. The client may likewise send a lot of other requests (capabilities etc.), but you can ignore those for now. One important request is GetSystemDateAndTime: ONVIF clients query it because the timestamp is used in the authorization token.
What I would do is create a simple C/C++ web server with predefined XML answers for these three requests, then take a general ONVIF client like ODM (ONVIF Device Manager), try to connect to the camera from it, and see whether any other requests are issued that block the flow. My guess is that it could actually work with just these three, in the following order: GetSystemDateAndTime -> GetProfiles -> GetStreamUri. I could be wrong: the client might query something else, and not receiving an answer could stop it from taking further action, but I am not sure about this.
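The "predefined XML answers" idea can be sketched as a tiny dispatcher that matches the incoming SOAP body and returns a canned response. The XML fragments below are illustrative placeholders, not schema-valid ONVIF responses (though tds/trt/tt are the usual ONVIF namespace prefixes), and the "token0" profile token and RTSP URL are made up for the example:

```cpp
#include <string>

// Return a canned XML body for the three ONVIF requests discussed above.
// Element content and the "token0" profile token are placeholders; a real
// device must return schema-valid SOAP envelopes.
std::string canned_response(const std::string& soap_body) {
    if (soap_body.find("GetSystemDateAndTime") != std::string::npos)
        return "<tds:GetSystemDateAndTimeResponse>...</tds:GetSystemDateAndTimeResponse>";
    if (soap_body.find("GetProfiles") != std::string::npos)
        return "<trt:GetProfilesResponse><trt:Profiles token=\"token0\"/></trt:GetProfilesResponse>";
    if (soap_body.find("GetStreamUri") != std::string::npos)
        return "<trt:GetStreamUriResponse><tt:Uri>rtsp://192.168.1.10:8554/cam</tt:Uri></trt:GetStreamUriResponse>";
    return "";  // unhandled request: a real server should return a SOAP fault
}
```

The hardcoded token returned by GetProfiles is exactly what the client later sends back in its GetStreamUri request, which is why a single predefined profile is enough for this one-camera use case.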

Implementing django-socketio iOS client

I want to make a chat app for iOS using Django. The server-side socket communication method that I've chosen is django-socketio because it integrates well with Django. So my problem is selecting a way to implement the client side on iOS. All the django-socketio client examples are in JavaScript, e.g.:
To subscribe to a channel client-side in JavaScript use the socket.subscribe method:
var socket = new io.Socket();
socket.connect();
socket.on('connect', function() {
    socket.subscribe('my channel');
});
I want to know how to implement such code in my iOS client: how to implement the "subscribe()" channel function from it, and how to implement interactivity from iOS with the various other events defined by the django-socketio server, like:
@on_connect
def my_message_handler(request, socket, context):
    ...
and @on_message, @on_subscribe, etc.
I'm currently trying to use NSStream and CFStream as shown here, but it's proving difficult for me to adapt them to talk to the django-socketio server.
(Note: for all those who saw the last "here" link, yes, I did first go the way of using Twisted instead of django-socketio, but it doesn't have any well-defined, concrete method of integration with Django (yes, I tried searching everywhere). Maybe that will be my next question here.)
https://github.com/pkyeck/socket.IO-objc
PS: it doesn't currently support Socket.IO protocol 1.0, and neither does django-socketio.

Any Ruby AMF clients out there?

I'm looking for a way to push/receive AMF0/AMF3 messages in Ruby (Rails).
From what I've read, RubyAMF can only act as a server.
What I need is a library that allows client access to FMS/Wowza. Any ideas?
As the developer of RocketAMF (http://github.com/warhammerkid/rocket-amf), I don't know of any AMF libraries that can act as clients out of the box. However, if you're interested, it shouldn't be that difficult to reverse the server code in RocketAMF to work as a client. You would just write a serializer for RocketAMF::Request that uses the standard message calling style (#<RocketAMF::Request:0x10167b658 @headers=[], @messages=[#<RocketAMF::Message:0x10167ae88 @response_uri="/1", @data=["session string", 42.0], @target_uri="App.helloWorld">], @amf_version=3>). Then you would write a deserializer for RocketAMF::Response.
I'll try to put together a new RocketAMF build in the next couple of days that can communicate with FMS, but that's not a guarantee.
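For anyone rolling such a client serializer by hand, the AMF0 wire format for the two most common primitives is small: a number is marker byte 0x00 followed by the big-endian IEEE-754 double, and a short string is marker 0x02, a big-endian u16 byte length, then the UTF-8 bytes (per Adobe's AMF0 specification). A sketch in C++, since these are byte-level encoders rather than RocketAMF API calls:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// AMF0 number: marker 0x00 + 8-byte big-endian IEEE-754 double.
std::vector<uint8_t> amf0_number(double d) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);  // uint64 value equals the IEEE bit pattern
    std::vector<uint8_t> out{0x00};       // AMF0 number type marker
    for (int shift = 56; shift >= 0; shift -= 8)
        out.push_back(static_cast<uint8_t>(bits >> shift));
    return out;
}

// AMF0 short string: marker 0x02 + big-endian u16 length + UTF-8 bytes.
std::vector<uint8_t> amf0_string(const std::string& s) {
    std::vector<uint8_t> out{0x02};       // AMF0 string type marker
    out.push_back(static_cast<uint8_t>(s.size() >> 8));
    out.push_back(static_cast<uint8_t>(s.size() & 0xFF));
    out.insert(out.end(), s.begin(), s.end());
    return out;
}
```

Encoding the 42.0 and "session string" arguments from the inspect dump above is just a matter of concatenating such fragments inside the request body.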