I want to create an action on Google Assistant to control a smart lightbulb using a Google Home - google-assistant-sdk

I want to create a new action that can understand my voice commands through Google Home in a foreign language and control my lights. Basically, if I say a phrase in a foreign language, it should understand whether to turn the light on or off. I'm starting from scratch and I'm lost.

The platform overview is the best place to get started with information on how voice commands to the Google Assistant can be translated into commands for your device.
There are several codelabs to walk you through what's happening.

Related

Webhooks triggered by Google Assistant

I noticed that IFTTT.com has a Google Assistant integration that basically allows them to set up, for each of their users, "trigger words" that trigger a call to a webhook. I searched a lot in the API docs and found no proper way to do the same, only ways to set up conversations or IoT interactions.
I kind of want to build something similar to the IFTTT integration with a way to programmatically set up actions via an API (not via the dashboard).
Is it possible to do or is this just a custom development Google made for IFTTT?
In my research I found something called "Direct actions", but it no longer seems to exist in the Google Assistant docs. Can you help me with that?
I don't know if my questions are very clear; please tell me if they are not.
Thanks in advance for your help
Have a good day
Here are some options similar to the IFTTT integration:
Create routines in the Google Home app. That will allow you to create custom commands that activate one or several actions.
Create a smart home action. It's a type of direct action, as opposed to a conversational action, and will let you directly invoke the Assistant for a subset of commands.
You could also create a conversational action. While it would not give you the same direct control, you can still run actions quickly by using a deeper invocation, i.e. "Ask my test app to do an action". It would also give you much greater flexibility over the input.
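For the smart home route, fulfillment boils down to answering the SYNC/QUERY/EXECUTE intents with JSON, as described in the smart home action docs. As a rough sketch (the requestId and device id here are placeholders, and the trait shown is OnOff), an EXECUTE response for turning a light on might look like:

```json
{
  "requestId": "request-id-from-intent",
  "payload": {
    "commands": [
      {
        "ids": ["light-1"],
        "status": "SUCCESS",
        "states": {
          "on": true,
          "online": true
        }
      }
    ]
  }
}
```

The Assistant handles the speech recognition (including foreign languages you enable for the action), so your fulfillment only ever deals with structured intents like this.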

Predefined User Input with Watson Assistant

I'm trying to build a Watson chatbot (Assistant) that will use pre-defined dialog options instead of the free-flowing text input method, such as this: https://www.socialmediaexaminer.com/wp-content/uploads/2017/01/sh-techcrunch-facebook-messenger.png
Is there a way to do this, either in "advanced mode" or through the GUI?
If you are deploying your chatbot to a WordPress site, we actually support this functionality out of the box.
Once you install the IBM Watson Assistant plugin and go to its settings page, you'll find detailed instructions in the Advanced tab.
The process is quite simple.
In the JSON editor for your node response, add an array of predefined options on the same level as the text key.
The options will then be displayed as buttons in the chat box, whenever that particular response is issued:
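As a sketch of that convention (the exact shape of the text key depends on your skill version; the point is that options sits beside it), the node response JSON might look like:

```json
{
  "output": {
    "text": {
      "values": ["Would you like to book a table?"]
    },
    "options": ["Yes", "No"]
  }
}
```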
If you are not using our WordPress plugin, it's trickier because your app will have to implement this last part from scratch. However, the basic idea remains the same.
Your app could retrieve the options values from the response and generate the appropriate buttons depending on these values. The WordPress plugin just makes the whole process dead simple.
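To make that concrete, here is a minimal sketch of the front-end side: given the JSON body returned by the Watson Assistant message endpoint, it pulls out the "options" array and turns it into button descriptors. Note that the "options" key is the WordPress plugin's convention, not an official Watson field, so your own backend must have put it there.

```python
def extract_buttons(response):
    """Return button descriptors for any predefined options in a response.

    `response` is the parsed JSON body from the Watson Assistant message
    endpoint; the "options" key is a plugin convention, not an official field.
    """
    output = response.get("output", {})
    return [{"label": opt, "value": opt} for opt in output.get("options", [])]

# Hypothetical response carrying both text and predefined options:
sample_response = {
    "output": {
        "text": {"values": ["Would you like to book a table?"]},
        "options": ["Yes", "No"],
    }
}

buttons = extract_buttons(sample_response)
```

When the user taps a button, your front end would send the corresponding value back as the next message input, exactly as if it had been typed.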
It's worth noting that this options syntax is a convention we introduced through our WordPress plugin rather than an official specification. It's very likely that the Watson Assistant team will introduce a standard syntax to handle this scenario, in the future.
There is no way to do this specifically through Watson Assistant because you are just building the backend component, not the full application that can use Watson Assistant.
You would have to program the front end that consumes the Watson Assistant API to send the pre-defined dialog options you want to send.
Additionally, you could deploy to Facebook Messenger, which may support this through configuration in its UI, but I haven't used it. I would recommend editing your question to be specific to Facebook Messenger if you want an answer about the functionality available there.

Xcode 6 - iOS8: Allow Master User To Update Information

I was wondering if anyone could point me to any useful tutorials on allowing a master user to update information for their app. I am looking into creating an application for a local restaurant and I want the owner to be able to update information like the soup of the day and such by themselves.
I have been looking into JSON and CMS options for this, but I am unable to find any useful information regarding iOS 8 or Xcode 6. If anyone could provide me with this information, or any other suggestions on how to achieve this, I would be very grateful!
(I am using Swift not Objective-C)
This is not a code issue; it is a development-concept issue. You have many choices, including building an API that is updated by the restaurant. The app then connects to the API and gets the current menu information. If you feel you need to do this via the app itself, make a special username that is allowed to modify the menu. This can be accomplished by matching the username exactly or by using a regex. It all really depends on the structure of your app platform.
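The "special username" idea can be sketched in a few lines. This is illustrative only: the account name and the regex naming scheme below are assumptions, and in a real app the check would sit behind proper authentication on your backend rather than in the client.

```python
import re

# Hypothetical admin accounts: either an exact username match, or any
# account following an assumed "owner_<name>" naming scheme.
ADMIN_USERNAMES = {"restaurant_owner"}
ADMIN_PATTERN = re.compile(r"owner_[a-z]+")

def can_edit_menu(username: str) -> bool:
    """Return True if this user may update the menu (soup of the day, etc.)."""
    if username in ADMIN_USERNAMES:
        return True
    return ADMIN_PATTERN.fullmatch(username) is not None
```

Your Swift app would perform the same check server-side and only show the editing UI when the signed-in user passes it.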

Clickable Link to Custom Route from Google Maps web to iOS app

My first question here. I'm hoping I'm doing the tags and such correctly so the right folks might be able to see this. If this question should be placed in another area, please let me know.
I'm trying to create a link to a set of custom driving directions that, when clicked from the native iOS Mail app, will open the Google Maps iOS app and populate the custom directions.
I have a map which has driving directions from Point A to Point B, but I've significantly revised the route using the click and re-position functionality in Google Maps (web).
Using the share function from Google Maps (web) creates a link that does in fact retain the custom route, which can be seen when the link is clicked and it opens in Safari. I don't mind that it opens in Safari, since at this point it prompts you to open up these directions in the Google Maps app. But here's where it gets muddy.
When you click the "use the app" button from Safari, the custom route does not carry over to the app. You are shown default route choices based on Point A and B.
The Google Directions API section on Waypoints and using the 'via:' prefix seems like the best way around this, but I'm not sure how I'd turn that into something clickable from an email.
For reference, here is one of the maps I made with a custom route. Basically I want to have it go from Point A to B along one road. I had to make a handful of points along the route in order to keep the route on the same stretch of road.
Further complicating this is Google's attempt to reroute even this map, based on real-time traffic. I went back to this link after copying it here to find out there's an accident on this road right now and it's re-routing through side streets.
Any help would be much appreciated.
Well, you can open Google Maps with a URL scheme, as documented:
comgooglemaps://?saddr=Google,+1600+Amphitheatre+Parkway,+Mountain+View,+CA+94043&daddr=Google+Inc,+345+Spear+Street,+San+Francisco,+CA&center=37.422185,-122.083898&zoom=10
However, there is no mention of how you can add waypoints. Indeed, you cannot even do it in the Maps app, so there is basically nothing we can do...
By the way, here is a blog post that includes some workarounds; hope this helps a little.
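On the `via:` idea from the question: a clickable link can be built with the Google Maps URLs `dir` endpoint. The `waypoints` parameter is documented there, and the `via:` prefix (pass-through points that are not stops) comes from the Directions API waypoint syntax; whether the Google Maps iOS app preserves `via:` points when it opens such a link is an assumption you would need to verify. A small sketch of building the link:

```python
from urllib.parse import urlencode

def maps_directions_link(origin, destination, via_points=()):
    """Build a Google Maps URLs 'dir' link with optional via: waypoints."""
    params = {
        "api": "1",
        "origin": origin,
        "destination": destination,
        "travelmode": "driving",
    }
    if via_points:
        # Pipe-separated waypoints; "via:" marks pass-through points.
        params["waypoints"] = "|".join("via:%s" % p for p in via_points)
    return "https://www.google.com/maps/dir/?" + urlencode(params)

# Hypothetical mid-route coordinate pinning the route to one road:
link = maps_directions_link(
    "Point A", "Point B",
    via_points=["40.7128,-74.0060"],
)
```

That URL can be pasted into an email as a normal hyperlink; note that real-time traffic rerouting, as described above, may still override the pinned route.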
Here is what I do. It's a long method, but it works:
Open Google My Maps (instead of Google Maps) and make your custom route.
In the options, export your route to KML/KMZ.
At gpsvisualizer.com, convert your route to the format your app accepts.
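If you want to skip the conversion site, the coordinates can also be pulled straight out of the My Maps KML export. A minimal sketch (element names follow the KML 2.2 spec; a real export may nest Placemarks inside Folders, which the `.//` search below tolerates):

```python
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def route_coordinates(kml_text):
    """Return (lat, lon) pairs for every LineString in a KML document."""
    root = ET.fromstring(kml_text)
    points = []
    for coords in root.iterfind(".//kml:LineString/kml:coordinates", KML_NS):
        # KML stores "lon,lat,alt" triples separated by whitespace.
        for triple in coords.text.split():
            lon, lat, *_ = triple.split(",")
            points.append((float(lat), float(lon)))
    return points

# Tiny stand-in for a My Maps export:
sample = """<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark><LineString><coordinates>
    -122.08,37.42,0 -122.03,37.33,0
  </coordinates></LineString></Placemark>
</kml>"""
```

From there you can emit whatever route format your app accepts, or feed the points into a directions link as `via:` waypoints.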

How can I send a tn3270 compatible screen to a terminal emulator?

I want to test a terminal emulator I have but, before someone mentions it, I really don't want to learn Hercules 360.
I'm not after creating mainframe applications; what I really want to do is learn how to send tn3270 screens for display on a terminal emulator. My reason is simple: I have a set of screens from a customer whose layout, look and feel, etc. are all fixed. I want to test my client software against those screens without having to trail all the way out to the customer's site in the first instance.
Failing that, does anyone know of a "less intense" method of simulating a tn3270 environment complete with fields, attributes, etc.?
I found a reference to an old freeware product called miniFrame by CodeCutter, but its website no longer exists and Google searches return various links that just point back to it.
Thanks
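One "less intense" possibility is to build the 3270 data-stream bytes yourself and push them down a telnet socket once the session has negotiated BINARY, EOR, and TERMINAL-TYPE (each record then ends with IAC EOR, 0xFF 0xEF). The sketch below only builds the stream for one screen; the command and order codes and the buffer-address code table are from the 3270 data-stream documentation, while the field-attribute byte is an assumption you should check against the spec for the exact protected/intensity bits.

```python
EW, WCC = 0xF5, 0xC3   # Erase/Write command; WCC that unlocks the keyboard
SBA, SF = 0x11, 0x1D   # Set Buffer Address and Start Field orders

# Standard 12-bit buffer-address code table from the 3270 data-stream spec.
ADDR = [
    0x40, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
    0xC8, 0xC9, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F,
    0x50, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7,
    0xD8, 0xD9, 0x5A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F,
    0x60, 0x61, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7,
    0xE8, 0xE9, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F,
    0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7,
    0xF8, 0xF9, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F,
]

# Assumed attribute: protected bit (0x20) mapped through the same table --
# verify the exact attribute encoding against the data-stream reference.
ATTR_PROTECTED = ADDR[0x20]

def buffer_addr(row, col, width=80):
    """Encode a row/column as a two-byte 12-bit buffer address."""
    a = row * width + col
    return bytes([ADDR[(a >> 6) & 0x3F], ADDR[a & 0x3F]])

def screen(row, col, text):
    """Build an Erase/Write stream placing one protected field of text."""
    out = bytearray([EW, WCC])
    out += bytes([SBA]) + buffer_addr(row, col)
    out += bytes([SF, ATTR_PROTECTED])
    out += text.encode("cp037")  # 3270 text is EBCDIC
    return bytes(out)

stream = screen(1, 0, "HELLO")
```

With the customer's fixed screens transcribed into calls like `screen(...)`, a small socket server replaying these records would let you exercise your client without a mainframe, though getting the telnet negotiation right is the fiddly part.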
