I am working on a Ruby on Rails application. It has a chat feature where users can send real-time messages to each other, implemented using Action Cable.
Now the client wants me to integrate a voice recorder like WhatsApp's.
The voice should be recorded and then sent to the other user, who can then play that file.
I found a sample in JS: https://webaudiodemos.appspot.com/AudioRecorder/index.html
Does anyone have an idea how I can do this in Rails? I can't even find a reference on sending audio files in Rails.
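Since the linked sample is client-side JS, one possible shape for the flow described above is: record in the browser with the MediaRecorder API, POST the finished blob to a Rails endpoint (which could attach it via Active Storage), and let the existing Action Cable channel broadcast the message with the audio URL. A minimal sketch, assuming a hypothetical `POST /messages` endpoint and `message[...]` parameter names (none of these come from the original question):

```javascript
// Build a multipart form body that a Rails controller could read as
// params[:message][:audio] (the uploaded file) and params[:message][:conversation_id].
// Field names are illustrative assumptions.
function buildVoiceMessagePayload(audioBlob, conversationId) {
  const form = new FormData();
  form.append("message[conversation_id]", String(conversationId));
  form.append("message[audio]", audioBlob, "voice-note.webm");
  return form;
}

// Browser-only part (requires a page with microphone access, so it is only
// defined here, never called): record the mic, then POST the finished blob.
async function recordAndSend(conversationId, durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    const blob = new Blob(chunks, { type: "audio/webm" });
    await fetch("/messages", {
      method: "POST",
      body: buildVoiceMessagePayload(blob, conversationId),
    });
  };
  recorder.start();
  setTimeout(() => recorder.stop(), durationMs); // stop after a fixed duration
}
```

On the Rails side the created message (with its attachment's URL) can then be broadcast over the existing Action Cable channel, and the recipient plays it with a plain `<audio>` tag.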
I am new to Rails, and I have the chance to implement voice and video call features in an app, but I have to do it with ZegoCloud.
I searched a lot for voice and video call implementations, but most of the articles are about Twilio.
What should I do, or which gem should I pick to integrate it?
I'm trying to implement an app à la Google Hangouts, where you can create a video conference and share your screen when needed, using the Twilio SDK.
Screen recording should work system-wide, so I'm using ReplayKit 2 and a Broadcast Upload Extension.
The problem is: how can I upload screen data from the extension to the same Twilio room started in the app?
I'm looking at Twilio example code: https://github.com/twilio/video-quickstart-swift/tree/master/ReplayKitExample
What they do is connect to the room from within the extension.
My flow is different, though: I initiate the call from the main app and only then initiate screen sharing.
I imagine the Twilio SDK could generate a presigned URL that is shared with the extension, which then uploads the data stream there.
Any idea if this is achievable with Twilio?
I am using a Cordova iOS app for live streaming. I am able to stream video from an iPhone, but when I try to join a stream, the remoteStreamAddedHandler function that should display the streaming video is never called.
I am using the cordova-plugin-iosrtc plugin. It even shows the status that "someone has joined room", but it does not call remoteStreamAddedHandler, where I could append the video tag. It works fine on an Android phone.
Thanks
The remoteStreamAddedHandler event is fired when a remote user has already joined the room and published their stream.
In the apiRTC tutorial 11 - VIDEO CALL STREAMING, the user only subscribes to available streams (there is no video publishing).
You need a user connected with tutorial 10 - GROUP CALL; that user will publish their stream and subscribe to remote streams.
Tutorial 12 - GROUP CALL - ADVANCED shows an advanced sample where the user can choose which streams to publish and subscribe to.
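The point above can be illustrated with a toy model (this is deliberately not the real apiRTC or cordova-plugin-iosrtc API): joining a room fires presence events ("someone has joined room"), but a remoteStreamAddedHandler-style callback only runs once some peer actually publishes a stream.

```javascript
// Toy room: presence and stream publication are separate events.
class Room {
  constructor() {
    this.publishedStreams = [];
    this.handlers = [];
  }
  // join registers the member's stream handler and replays any streams that
  // were already published; joining by itself adds no stream for anyone.
  join(onRemoteStreamAdded) {
    this.handlers.push(onRemoteStreamAdded);
    this.publishedStreams.forEach((s) => onRemoteStreamAdded(s));
  }
  // publish is what actually triggers every member's handler.
  publish(stream) {
    this.publishedStreams.push(stream);
    this.handlers.forEach((h) => h(stream));
  }
}

const room = new Room();
const received = [];
room.join((s) => received.push(s)); // subscribe-only viewer, as in tutorial 11
// received is still empty: presence alone is not a stream
room.publish("camera-of-user-A");   // a tutorial-10 style publisher
// received now holds "camera-of-user-A"
```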
I have been checking out the Alexa Skills Kit for the past few days, and I have been poring through the documentation for both the Skills Kit and the Voice Service. I am just having a little hiccup trying to understand the flow. I have implemented one of Amazon's sample skills (the favorite color sample) in the developer console and also wrote a sample Lambda function to handle the type of response that will be delivered. It works in the test simulator, and what is left is basically getting Lambda running through my iOS app. However, I have the impression that I don't have to use the Voice Service. Am I wrong? I am quite confused; it would be awesome if anybody with more clarity could shed some light on the matter. If I get Lambda working, I think it will accept requests in a particular format. Where do I have to send the encoded audio to get a JSON response to pass to the Skills Kit? To the Alexa Voice Service?
I am also authenticating my app using Cognito and DynamoDB. If I were to use the Alexa Voice Service, it is mentioned that the user will also have to log in to Amazon. So do I still have to work with the Login with Amazon SDK? Or is there a workaround?
Based on the Amazon documentation, there are two ways to interact with Alexa:
It sounds like you want to implement the app through the Companion App method.
As far as the JSON goes, I am currently resolving that issue now (I will post the answer once I have it resolved).
Basically, you have to use AVFoundation to capture audio from the iPhone and send two HTTPS messages to Alexa (one message with a JSON body, and a second message with the captured audio as the body), based on the documentation.
Companion App
(You have a device, such as a smart speaker, that you want to add Alexa to. So you build in support for AVS. Great! Now you need a way to authorize it and associate it with the user's account. This is the "companion app" approach: the companion app connects to your smart product and allows the user to log in and authorize the speaker to use Alexa and connect to their Amazon account.)
Mobile or Website
AVS App
(You don't have a device you need to authorize; instead, you want to speak to Alexa from within your Android/iPhone application.)
Android or iPhone
You can find a Swift example on GitHub showing how to implement an iOS AVS client:
https://github.com/chintan1891/iOS-Alexa
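The "two messages" described above are really two parts of one multipart HTTPS POST: a JSON metadata/event part followed by the captured audio part. A minimal sketch of that body shape, with an illustrative boundary and part names; the exact event JSON, part headers, and endpoint are defined by the AVS API documentation, so treat the specifics here as assumptions:

```javascript
// Assemble a two-part multipart body: part 1 carries the JSON event metadata,
// part 2 carries the raw captured audio bytes.
function buildAvsRequestBody(metadataJson, audioBytes, boundary) {
  const head =
    `--${boundary}\r\n` +
    'Content-Disposition: form-data; name="metadata"\r\n' +
    "Content-Type: application/json; charset=UTF-8\r\n\r\n" +
    JSON.stringify(metadataJson) + "\r\n" +
    `--${boundary}\r\n` +
    'Content-Disposition: form-data; name="audio"\r\n' +
    "Content-Type: application/octet-stream\r\n\r\n";
  const tail = `\r\n--${boundary}--\r\n`;
  return Buffer.concat([
    Buffer.from(head, "utf8"),
    Buffer.from(audioBytes),
    Buffer.from(tail, "utf8"),
  ]);
}
```

The body would then be POSTed (fetch on the web, NSURLSession on iOS) with `Content-Type: multipart/form-data; boundary=<boundary>` and the user's Login with Amazon access token in the Authorization header, which is why the Amazon login step is required.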
In my app there is a recording option.
I wrote code to record voice using AVAudioRecorder, and it works fine, but my client's requirement is that it open the native voice recorder on iOS devices, i.e. "Voice Memos". From my research, many people have answered that we are unable to access the Voice Memos app. I am confused.
Can you please help: is there any way to access Voice Memos?
To date, it is not possible to access voice memos recorded by the native iOS recorder. The best option is to use AVAudioRecorder and let users record their own memos in your app, upload them to a server, and access them there to show the recorded memos.