I am building an RTC iOS app client using the Google WebRTC iOS library. However, since WebRTC doesn't implement signalling, I am looking for an easy way to implement a SIP stack at the signalling layer. I tried PJSIP but it didn't work out:
First, I followed the PJSIP guide "Integrating Third Party Media Stack into PJSUA-LIB", but I didn't know how to proceed, especially since the two projects overlap significantly (both implement NAT traversal and SDP). Also, PJSIP is in C, WebRTC is in Objective-C, and the whole app will be in Swift.
Second, I created two separate projects, one for PJSIP and one for WebRTC, and both ran successfully on iOS. As a first step I then tried to combine the two projects into one, but it turned out that both projects use libsrtp, in different versions, which caused conflicts and compile errors.
I am not sure whether PJSIP is really what I need, especially since WebRTC already has all the features I need except SIP signalling. I would be thankful if anyone could guide me on how to proceed with PJSIP or point me to another open-source, easy-to-use SIP library.
Thanks.
You may want to use (and potentially contribute to) the RestComm iOS SDK at https://github.com/Mobicents/restcomm-ios-sdk. It uses the Sofia SIP stack.
I found a nice open-source SIP library with a small footprint called libre.
I would consider WebSocket-based signaling.
Take a look here: https://github.com/muaz-khan/WebRTC-Experiment
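To make the WebSocket-signaling suggestion more concrete, here is a minimal sketch of a signaling client on iOS using URLSessionWebSocketTask (iOS 13+). The endpoint URL and the JSON message shape are assumptions for illustration; in a real app the messages would carry the SDP offers/answers and ICE candidates produced by the WebRTC library.

```swift
import Foundation

// Minimal WebSocket signaling client sketch (iOS 13+).
// The server URL and the JSON message shape are placeholders, not a real protocol.
final class SignalingClient {
    private var task: URLSessionWebSocketTask?

    func connect() {
        let url = URL(string: "wss://example.com/signaling")!   // hypothetical endpoint
        task = URLSession.shared.webSocketTask(with: url)
        task?.resume()
        listen()
    }

    // Send an SDP offer or answer as a JSON message.
    func send(sdp: String, type: String) {
        let payload: [String: String] = ["type": type, "sdp": sdp]
        guard let data = try? JSONSerialization.data(withJSONObject: payload) else { return }
        task?.send(.data(data)) { error in
            if let error = error { print("send failed: \(error)") }
        }
    }

    // Receive messages relayed from the remote peer via the signaling server.
    private func listen() {
        task?.receive { [weak self] result in
            switch result {
            case .success(let message):
                if case .data(let data) = message,
                   let json = try? JSONSerialization.jsonObject(with: data) {
                    print("received signaling message: \(json)")
                }
                self?.listen()   // keep listening for the next message
            case .failure(let error):
                print("receive failed: \(error)")
            }
        }
    }
}
```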
I am a developer who is still learning. I want to design an app which allows multiple people to have video conferences/chats simultaneously, something like Zoom. I know I can design native apps specific to Android as well as iOS, but I am still learning Android development and have no idea about iOS code. I searched and found that we can build hybrid apps with React, Node.js, or Angular.js that work across platforms. But as I'm a newbie I need suggestions as well as guidance. What I'm expecting in my app are the following things:
Should support all video resolutions and audio quality, and should work in low and high network scenarios
Should be low on power/processor usage
Should not have any external hardware dependency
Should work on any device
Should have a chat option during a conference, even a multi-person conference
Should have sign-in and non-sign-in options to join a conference
Can be a browser and/or app-based interface
Should have encrypted network communication
Should have an audio/video recording feature
Should have screen/file sharing capabilities
Should allow audio-to-closed-captioning during chat (multilingual)
Should be able to host multiple concurrent conferences with multiple participants in each conference
I know it's a tedious task to cover everything I discussed, but I need guidance on how to do this.
I have already described my expectations, so now I want to know what steps I need to take: how to start, where to start, what language/library I should choose, and whether a hybrid app would be a good idea or whether I should go for native apps. As I said earlier, I am a learner, so I am going to learn whatever it takes to get my project done, whether that's React or Node or Angular or whatever the experienced developers here suggest. I know my question may look broad or even vague, but I am asking only because I see Stack Overflow as a group of supportive, accomplished coders. I hope you will help me get my project done. Thank you!
OK, then you have a lot of work to do. I will point you to some references which should give you a good start, and I will try to keep this as short as possible.
As you mentioned, WebRTC is the way to go.
With WebRTC, you can add real-time communication capabilities to your application that works on top of an open standard. It supports video, voice, and generic data to be sent between peers, allowing developers to build powerful voice and video communication solutions. The technology is available on all modern browsers as well as on native clients for all major platforms.
This blog explains how WebRTC functions in detail: https://medium.com/#anto.christo.20/understanding-web-real-time-communication-webrtc-d4cec5a43f2f
This blog explains how to build peer-to-peer video calling in Android: https://medium.com/#anto.christo.20/understanding-web-real-time-communication-webrtc-d4cec5a43f2f
https://webrtc.org/ also contains a lot of head-start material, including sample code.
Once you have done this you can add other features on top of it.
Now, this will take care of peer-to-peer, but if you want to build multi-user functionality from scratch, there is some extra work required, as mentioned in this answer: how to build multi-user video chatting web app using webRTC, node.js and socket.io
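To make the peer-to-peer part more concrete, here is a rough sketch of the offer side of the WebRTC flow, written in Swift against Google's WebRTC framework (the same flow applies in the Android and browser APIs). The STUN server and empty constraints are placeholders, exact signatures vary slightly between WebRTC releases, and a real app would implement RTCPeerConnectionDelegate and exchange the SDP and ICE candidates over your signaling channel (e.g. socket.io).

```swift
import WebRTC

// Rough sketch: create a peer connection and generate an SDP offer.
// Placeholders throughout; error handling is minimal.
let factory = RTCPeerConnectionFactory()

let config = RTCConfiguration()
config.iceServers = [RTCIceServer(urlStrings: ["stun:stun.l.google.com:19302"])]

let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)

// A real app passes an object conforming to RTCPeerConnectionDelegate here so it
// can collect ICE candidates and react to connection-state changes. Note that in
// some newer WebRTC builds this initializer returns an optional.
let peerConnection = factory.peerConnection(with: config, constraints: constraints, delegate: nil)

peerConnection.offer(for: constraints) { sdp, error in
    guard let sdp = sdp else { return }
    peerConnection.setLocalDescription(sdp) { _ in
        // Send sdp.sdp to the remote peer through your signaling server, then
        // apply the peer's answer with setRemoteDescription(_:completionHandler:).
        print("local offer created:\n\(sdp.sdp)")
    }
}
```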
If I wanted to build a real-time chat app for iOS using Objective-C, what would be the best way of going about it?
Assuming you've got your server-side things set up, you can use Square's SocketRocket to implement the client side: https://github.com/square/SocketRocket
If you're using socket.io on the backend, there are plenty of iOS libraries available for that as well. SIOSocket is one such library.
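A rough sketch of what the SocketRocket client side could look like (shown in Swift for brevity; the Objective-C calls are analogous). The URL is a placeholder, and the method names follow the classic SRWebSocket API, which has changed across versions, so check the current README.

```swift
import SocketRocket

// Minimal SRWebSocket client sketch. The URL is a placeholder; the delegate
// methods follow the classic SRWebSocket API and may differ in newer releases.
final class ChatClient: NSObject, SRWebSocketDelegate {
    private var socket: SRWebSocket?

    func connect() {
        socket = SRWebSocket(url: URL(string: "wss://chat.example.com/socket")!)
        socket?.delegate = self
        socket?.open()
    }

    func send(text: String) {
        socket?.send(text)   // classic API; newer versions use sendString(_:error:)
    }

    // Called when the connection is established.
    func webSocketDidOpen(_ webSocket: SRWebSocket!) {
        print("connected")
    }

    // Called for every incoming message (a string or binary data).
    func webSocket(_ webSocket: SRWebSocket!, didReceiveMessage message: Any!) {
        print("received: \(message ?? "")")
    }
}
```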
Maybe I am just lazy, but I do not see a point in building it all from scratch.
There are plenty of backend providers who will be happy to provide you with a ready backend and a library for building your app.
So, you'll just need to connect the solution to your project and build the UI according to your needs.
Here are some backend providers you might consider:
ConnectyCube
Firebase
Sendbird
Layer
etc.
They provide different sets of features, so I'd recommend first checking what each of them provides.
This article might be of some help as well.
Some of them, like ConnectyCube, can also provide development services, so you can order UI development based on your mockup designs from them as well.
As my final school-graduation project I am trying to develop a kind of spy car. That means there is an iPhone placed on a little LEGO car and an iPad used as a "steering wheel" for the car. It is also planned to transmit audio and video from the iPhone's microphone/camera to the iPad (more data in that direction than the steering data going the other way).
Initially, the connection from iOS to iOS should be established over a local Wi-Fi network and later, if possible, over 3G (by using the iOS devices' network IPs and a DNS server to deal with frequently changing addresses).
My question is: which technology do you recommend using? I read about GameKit, peer-to-peer and so on, but I think these technologies are too abstract for later being able to communicate over 3G. I guess I need to go a little deeper into the lower levels of the communication process. Any suggestion that could bring me a step forward is highly appreciated! (Also regarding other parts of my project.)
One more thing: some users suggested using a third-party service and routing the (video) data through an external server. If possible, I'd rather not use any "middleman"; it should just be basic server-client communication where the iPad is the server and the iPhone is the client.
It is kind of an open-ended question, but interesting.
First of all, GameKit does have 3G peer-to-peer support; see here:
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/GameKit_Guide/Matchmaking/Matchmaking.html
It will handle the peer-to-peer addressing and establishment of the socket. It can also handle voice chat, but I have personally never tried this feature, so I can't say whether it is feasible in your case.
One idea is to leverage existing video-chat services. This will give you a low-latency audio/video channel with peer-to-peer addressing (well, likely using a central server).
Apple's FaceTime is such a service, but there is no public API to it (AFAIK). Same goes for Skype and Google.
There are some paid services that look like they have nice iOS APIs:
http://tokbox.com/platform
http://docs.weemo.com/sdk/ios/
You will have to figure out a way to transmit control commands to the peer iPhone; I did not check whether the services above support sending text messages/arbitrary data.
Tokbox has a free trial so you could try it out and see if it works for you.
I would go for GameKit if this is a hobby project on a budget and there is time for hacking, and probably look into a more high-level API if there is a deadline...
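For the control-command side, here is a rough sketch of how steering data could be sent between the two devices over a Game Center match (GKMatchmaker/GKMatch). Player authentication and error handling are omitted, and the one-character command encoding is just an assumption for illustration.

```swift
import GameKit

// Sketch: find a 2-player match and use it to exchange steering commands.
// GKLocalPlayer authentication must have succeeded before matchmaking starts.
final class CarLink: NSObject, GKMatchDelegate {
    private var match: GKMatch?

    func findPeer() {
        let request = GKMatchRequest()
        request.minPlayers = 2
        request.maxPlayers = 2

        GKMatchmaker.shared().findMatch(for: request) { [weak self] match, error in
            guard let match = match else {
                print("matchmaking failed: \(error?.localizedDescription ?? "unknown")")
                return
            }
            match.delegate = self
            self?.match = match
        }
    }

    // Send a small steering command (e.g. "L", "R", "F", "S") to the other device.
    func send(command: String) {
        guard let match = match, let data = command.data(using: .utf8) else { return }
        try? match.send(data, to: match.players, dataMode: .reliable)
    }

    // Receive data from the remote peer (the iPhone on the car, or the iPad).
    func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
        if let command = String(data: data, encoding: .utf8) {
            print("received command: \(command)")
        }
    }
}
```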
Sorry for writing this as an answer, but I don't have enough rep to comment...
I'm working on a similar project and I currently don't have any advice regarding video streaming. However, from what I've read (extensively), I came to the conclusion that I need to use a p2p connection between devices for better performance and use socket programming to achieve this (although it's not the easiest choice to implement).
I considered using GameKit, which I think will probably answer most of your needs, as Krumelur pointed out. But in my case the app will eventually be cross-platform, so I had to use low-level network programming. You can check out my question here to see the sources I used when trying to make the connection between two peers; hopefully you'll have better luck than me...
I need to start developing an iOS application which will need to capture and analyze network traffic by connecting to certain devices. It will be replacing a Java-based desktop application which does the same thing using JPCAP: http://netresearch.ics.uci.edu/kfujii/Jpcap/doc/index.html
From what I can find out, JPCAP uses a C/C++ library under the hood to get the job done. The problem I am running into is that I don't know how I can use JPCAP in my iOS applications, since it is Java-based, even though the underlying library is C/C++, and Objective-C is a superset of C.
If anyone can give me any ideas on how I can get this done, or point me to any other APIs native to Objective-C, I would greatly appreciate it.
Thanks
Every indication I have, based on my experience in embedded computing, is that doing something like this would require expensive equipment to get access to the platform (ICE debuggers, JTAG probes, I2C programmers, etc.), but I've always wondered if some ambitious hacker out there has found a way to load native code onto a BlackBerry device. Anyone?
Edit: I'm aware of the published SDK and its attendant restrictions. I'm curious whether anyone has attempted to get around them, and if so, how far they got.
I've seen this question pop up in a number of different forums over time. The original BlackBerries were programmable in C++, but I think RIM ran up against the problems of trying to implement a secure platform in the C/C++ compile-to-native paradigm.
The devices do have JTAG ports, but unless one can get hold of the RIM code as a place to start, the problem is enormous.
I also have to wonder how useful a BlackBerry with a replacement FOSS operating system would be, since it would not likely have the protocols to connect to BES or BIS, send PINs, etc. If one were simply looking for the power of a handheld computing platform, I suspect there are many more likely candidates available.
No, C++ is no longer a supported RIM development tool, as they phased it out a number of years ago. Client applications can be developed in Java (or one of a few 5GL frameworks), and web and server-side apps can be developed using standard tools.
For those looking for updated information: the new PlayBook OS, also known as QNX, also known as BlackBerry 10 (or it will be when the phones running it come out), is in fact C/C++-based, also using QML and a C++ add-on called Cascades.
Unfortunately the official SDK website only seems to mention Java. According to Wikipedia, different versions of the BlackBerry use different processors. Combined with the fact that RIM uses a proprietary operating system for the devices, this makes it pretty difficult to develop native code without official tools. There is also a partial API-level security restriction which would further prohibit advanced tinkering.
I was just randomly searching for an answer to this and came across http://supportforums.blackberry.com/t5/Tablet-OS-SDK-for-Adobe-AIR/Native-C-C-SDK/td-p/778009, which mentions that BlackBerry intends to release a C/C++ SDK soon; more details will be provided at the 2011 Game Developers Conference.