I have been checking out the Alexa Skills Kit for the past few days. I have also been poring over the documentation for both the Skills Kit and the Voice Service. I am just having a little hiccup trying to understand the flow. I have implemented one of Amazon's sample skills (the favourite colour sample) in the developer console and also wrote a sample Lambda function to handle the kind of response that will be delivered. It's working in the test simulator, and what's left is basically calling Lambda from my iOS app. However, I have the impression that I don't have to use the Voice Service. Am I wrong? I am quite confused; it would be awesome if anybody with more clarity could shed some light on the matter. Once I get Lambda working as well, I think it will only accept requests in a particular format. Where do I have to send the encoded audio to get a JSON response to pass to the Skills Kit? To the Alexa Voice Service?
Also, I am authenticating my app using Cognito and DynamoDB. If I were to use the Alexa Voice Service, the documentation mentions that the user will also have to log in to Amazon. So do I still have to work with the Login with Amazon SDK, or is there a workaround?
Based on the Amazon documentation, there are two ways to interact with Alexa:
It sounds like you want to implement the app through the Companion App method.
As far as the JSON goes, I am currently resolving that issue myself (I will post the answer once I have it resolved).
Basically, you have to use AVFoundation to capture audio on the iPhone and send it to Alexa over HTTPS as a multipart request (one part with the JSON metadata and a second part with the captured audio as the body; see the sketch below). Based on the documentation:
- Companion App: you have a device (such as a smart speaker) that you want to add Alexa to, so you build in support for AVS. Great! Now you need a way to authorize it and associate it with the user's account. This is the "companion app" approach: the companion app connects to your smart product and allows the user to log in, authorize the speaker to use Alexa, and connect it to their Amazon account. (The companion app runs on mobile or a website.)
- AVS App: you don't have a device you need to authorize; instead, you want to speak to Alexa from within your Android/iPhone application. (Runs on Android or iPhone.)
You can find a Swift example on GitHub of how to implement an iOS AVS client:
https://github.com/chintan1891/iOS-Alexa
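For the request itself, here is a rough sketch of the capture settings and the multipart call. It follows the AVS v1 SpeechRecognizer REST interface that was current at the time; the endpoint URL, the "metadata"/"audio" part names, and the metadata JSON are recalled from those docs rather than from this thread, so double-check them against the current AVS reference. `accessToken` is assumed to come from Login with Amazon, and `sendRecognizeRequest`/`makeRecorder` are just illustrative helper names.

```swift
import AVFoundation
import Foundation

// Recorder settings for the 16 kHz, 16-bit mono PCM that AVS expects
// (an assumption matching the format string used in the metadata below).
func makeRecorder(to url: URL) throws -> AVAudioRecorder {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 16000,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false
    ]
    return try AVAudioRecorder(url: url, settings: settings)
}

// Send one multipart request: a JSON metadata part plus the captured audio.
func sendRecognizeRequest(accessToken: String, audioFileURL: URL) {
    let boundary = "BOUNDARY-\(UUID().uuidString)"
    var request = URLRequest(url: URL(string:
        "https://access-alexa-na.amazon.com/v1/avs/speechrecognizer/recognize")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    request.setValue("multipart/form-data; boundary=\(boundary)",
                     forHTTPHeaderField: "Content-Type")

    // Part 1: JSON metadata describing the speech request.
    let metadata: [String: Any] = [
        "messageHeader": [:],
        "messageBody": [
            "profile": "alexa-close-talk",
            "locale": "en-us",
            "format": "audio/L16; rate=16000; channels=1"
        ]
    ]
    var body = Data()
    body.append("--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"metadata\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: application/json; charset=UTF-8\r\n\r\n".data(using: .utf8)!)
    body.append(try! JSONSerialization.data(withJSONObject: metadata))

    // Part 2: the raw audio captured with AVFoundation.
    body.append("\r\n--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"audio\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: audio/L16; rate=16000; channels=1\r\n\r\n".data(using: .utf8)!)
    body.append(try! Data(contentsOf: audioFileURL))
    body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)

    request.httpBody = body
    URLSession.shared.dataTask(with: request) { _, response, error in
        // Alexa replies with a multipart response of its own: JSON directives
        // plus the audio (MP3) to play back.
        print(response ?? "no response", error ?? "")
    }.resume()
}
```

That response is where the JSON comes back from: you parse the directive part and play the returned audio, rather than building the Skills Kit JSON yourself.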
I have a bot that leverages all the cool tech that comes with the Bot Framework, e.g. LUIS, QnA Maker, Adaptive Cards, etc. The bot works well, and I can use WebChat to connect to it, ask questions, and get responses. However, I now need a native iOS (and eventually Android) app that can perform much like WebChat does, but I do not want to embed WebChat in a web control inside the native app. I plan to have voice always on, leveraging something like Snowboy or Picovoice for hotword detection to wake the app and send commands to the bot; users would ask things like "hey bot, what is the weather in Boston" and get presented with a result message or Adaptive Card.
Is voice streaming to the Direct Line API from Swift on iOS possible (I know most things are possible, so any pointers would be greatly appreciated)? Or am I approaching this from the wrong angle, and perhaps there is a better/easier way to achieve my goal?
Using any of the OpenTok client SDKs, is it possible to call from one client to another client and make it look similar to a "real" phone call?
I understand that user X and user Y can join the same "room" if they both know the name of the room. But I don't understand how user X can send a signal to notify user Y to join a specific room. How is this done? I want it to work cross-platform, i.e. on iOS, Android devices, and web pages. My use cases are:
- app to browser
- browser to app
- app to app
- browser to browser
Is it possible in all of my use cases? Which are possible?
Is it possible to use OpenTok in a mobile app to show an incoming call even though the app is in the background (like how Facebook Messenger and WhatsApp work)?
I've gone through the tutorials on the TokBox website and successfully got them working, but I can't find a way to let different users notify each other to join a session (neither peer-to-peer nor via a server). How should this be done?
I cannot find this functionality offered by another provider such as Twilio either.
Thanks in advance,
Let OpenTok do its job. In other words, use OpenTok to actually run the video session. All the things you talked about can be handled WITHOUT OpenTok, for example with REST APIs, WebSockets, or whatever else.
I used to work on a similar project. Have a server coordinate everything (which clients are connected, who calls whom, push notifications, etc.).
Whenever A needs to call B, the SERVER starts a "room" and puts A and B in it to talk to each other...
So, don't mix things up. Let your server orchestrate everything and use OpenTok for the video. It's designed for that purpose.
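As a minimal sketch of the caller side, assuming a hypothetical coordination endpoint on your own server (the https://example.com/calls URL, the JSON shape, and the `CallCredentials` type are all made up for illustration): the server creates the OpenTok session with a server-side SDK, returns the credentials to the caller, and notifies the callee however you like (APNs, FCM, a WebSocket).

```swift
import Foundation

// Hypothetical response from your own coordination server: it creates the
// OpenTok session server-side, returns the credentials for the caller, and
// separately pushes a notification to the callee so their app can fetch the
// same sessionId and its own token.
struct CallCredentials: Decodable {
    let apiKey: String
    let sessionId: String
    let token: String
}

// Ask the server to start a call from the current user to `calleeId`.
func startCall(to calleeId: String,
               completion: @escaping (CallCredentials?) -> Void) {
    var request = URLRequest(url: URL(string: "https://example.com/calls")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["callee": calleeId])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        let credentials = data.flatMap {
            try? JSONDecoder().decode(CallCredentials.self, from: $0)
        }
        // Hand apiKey/sessionId/token to the OpenTok SDK (OTSession on iOS,
        // OT.initSession in the browser) to actually join the video session.
        completion(credentials)
    }.resume()
}
```

The point is that OpenTok never needs to know about users or "calls"; it only sees the sessions your server decided to create, on every platform in your list.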
Is it possible to build an iOS app that streams from Spotify in a way that doesn't require the user to log in to Spotify? In a way where only our application is registered?
It is just not clear after scanning through the SDK / API section.
Disclaimer: I haven't worked with the Spotify SDK.
It should be possible. I suppose you could hardcode the login info directly in the code and have the app log in to Spotify with the same user account every time (the one hardcoded in the app).
On the other hand, I don't see a good reason why you would want all the users of your app to connect to Spotify as the same user. But that is your decision :).
Also, I think you should check the Spotify terms and conditions before you do that. Not sure if that's an issue or not.
I'm very new to backend development. I'm planning to build an iOS social app that uses Parse for the backend. But since Parse doesn't support a real-time chat feature, my idea is to use the QuickBlox chat API to build the real-time chat. I'm not sure whether this is possible or not, so I'm asking here.
Honestly, I haven't tried anything yet because I'm still at the planning stage. I want this to be clear before I start so I can build my app faster.
Yes, it's possible.
The QuickBlox user model has an external_user_id field; you can link a Parse user and a QuickBlox user through this parameter, or vice versa.
Next, say you are using a Parse user and you would like to start a chat with another Parse user.
You should:
- Get the QuickBlox user that corresponds to the Parse user.
- Log in to the QuickBlox chat.
- Send the message.
http://quickblox.com/developers/SimpleSample-users-ios#Integrate_QuickBlox_with_your_existing_User_Base
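Here is a minimal Swift sketch of that flow. To avoid pinning down exact SDK signatures here, the QuickBlox calls are hidden behind a hypothetical `QuickBloxChat` protocol; its three methods map one-to-one onto the steps above, and you would implement them with QBRequest/QBChat following the guide linked above. It also assumes both users have already been mirrored into QuickBlox with external_user_id (or another custom field) pointing back at their Parse user.

```swift
import Foundation

// Hypothetical wrapper around the QuickBlox iOS SDK, mirroring the steps above.
protocol QuickBloxChat {
    /// Step 1: look up the QuickBlox user whose external_user_id matches the Parse user.
    func fetchUser(forExternalId externalId: String,
                   completion: @escaping (Result<UInt, Error>) -> Void)
    /// Step 2: log in to the QuickBlox chat as that user.
    func connectChat(userId: UInt, password: String,
                     completion: @escaping (Error?) -> Void)
    /// Step 3: send a message to another QuickBlox user.
    func send(_ text: String, to opponentId: UInt)
}

// "Parse user A wants to chat with Parse user B", expressed with the wrapper.
func startChat(with chat: QuickBloxChat,
               myParseObjectId: String,
               myChatPassword: String,
               otherParseObjectId: String) {
    chat.fetchUser(forExternalId: myParseObjectId) { myResult in
        guard case .success(let myId) = myResult else { return }
        chat.connectChat(userId: myId, password: myChatPassword) { error in
            guard error == nil else { return }
            chat.fetchUser(forExternalId: otherParseObjectId) { otherResult in
                guard case .success(let otherId) = otherResult else { return }
                chat.send("Hello from Parse user \(myParseObjectId)", to: otherId)
            }
        }
    }
}
```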
Also, QuickBlox has the same features that Parse has. I recommend you review your requirements; maybe you don't need to use Parse at all, because in any case using two platforms will be a bit more complicated to support.
It's not immediately clear from their site, but can one build a one-to-many streaming app like Ustream using the TokBox web/iOS APIs? Or is it limited to just chat?
I want to make something that captures video from a desktop or iOS device and makes that video accessible at some public URL.
Yes, that is possible. OpenTok works with publishers and subscribers. A single publisher can publish to a session, and then many users can subscribe to the publisher's stream using an access token.
Check out the API: it shows what you can do and how, and even includes some good demos. There are many examples of how it has been used for online chat shows and other similar one-to-many applications.
They've also very recently released their WebRTC version, which makes it definitely worth a look!
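For reference, here is a rough Swift sketch of the viewer (subscriber) side using the OpenTok iOS SDK's OTSession/OTSubscriber classes. The kApiKey/kSessionId/kToken placeholders come from your server (tokens are generated with an OpenTok server SDK), and the broadcaster side is the same idea with OTPublisher and session.publish. Treat the exact Swift method names as approximations to check against the current SDK reference.

```swift
import OpenTok
import UIKit

// Placeholders: in a real app these come from your backend.
let kApiKey = "your-opentok-api-key"
let kSessionId = "session-id-created-by-your-server"
let kToken = "token-generated-by-your-server"

// Every viewer connects to the same session and subscribes to the single
// published stream, which is what makes the one-to-many case work.
class ViewerViewController: UIViewController, OTSessionDelegate, OTSubscriberKitDelegate {
    lazy var session = OTSession(apiKey: kApiKey, sessionId: kSessionId, delegate: self)!
    var subscriber: OTSubscriber?

    override func viewDidLoad() {
        super.viewDidLoad()
        var error: OTError?
        session.connect(withToken: kToken, error: &error)
    }

    // MARK: OTSessionDelegate
    func sessionDidConnect(_ session: OTSession) {}
    func sessionDidDisconnect(_ session: OTSession) {}
    func session(_ session: OTSession, didFailWithError error: OTError) {}

    // Called when the broadcaster's stream appears in the session.
    func session(_ session: OTSession, streamCreated stream: OTStream) {
        subscriber = OTSubscriber(stream: stream, delegate: self)
        var error: OTError?
        session.subscribe(subscriber!, error: &error)
        if let subscriberView = subscriber?.view {
            view.addSubview(subscriberView)
        }
    }
    func session(_ session: OTSession, streamDestroyed stream: OTStream) {}

    // MARK: OTSubscriberKitDelegate
    func subscriberDidConnect(toStream subscriberKit: OTSubscriberKit) {}
    func subscriber(_ subscriber: OTSubscriberKit, didFailWithError error: OTError) {}
}
```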