I have a smart home device and am integrating it with Google Home.
My question is about the Local SDK option. Can I skip the cloud integration path and only get local fulfillment going? By local fulfillment I mean that users can only say "OK Google, open the blinds" when they are connected to the same Wi-Fi network as the smart device.
TIA!
Yes, you can do that. Local execution will make lifecycle management easier for you, and from the user's point of view it behaves more or less the same as cloud execution.
First, you need to prepare your cloud instance to support local execution.
You need to configure how your SYNC intent responds; this is set in the "Device Config" option. More information can easily be found online.
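As a rough illustration, the SYNC response is where you advertise that a device can be reached locally. This is a hedged sketch, not an official snippet: the device ID, type, traits, and the otherDeviceIds value are hypothetical placeholders.

```python
# Sketch of a SYNC response body that flags a device for local execution.
# All IDs, names and traits below are placeholder values.
def build_sync_response(request_id: str) -> dict:
    return {
        "requestId": request_id,
        "payload": {
            "agentUserId": "user-123",  # your user's ID in your own system
            "devices": [
                {
                    "id": "blinds-1",
                    "type": "action.devices.types.BLINDS",
                    "traits": ["action.devices.traits.OpenClose"],
                    "name": {"name": "Living room blinds"},
                    "willReportState": True,
                    # The ID your local app uses to identify this device on
                    # the LAN, which is what ties it to local fulfillment.
                    "otherDeviceIds": [{"deviceId": "local-blinds-1"}],
                }
            ],
        },
    }
```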
The Google Home or Nest device will send the command directly over the local network, bypassing cloud fulfillment, which makes execution faster.
Also, to be on the safe side, the Assistant will fall back to your cloud server for fulfillment if the command cannot be completed locally.
For more information, this video should help.
Thank you!
I'm trying to build a custom IoT device that will be controlled via a Google Home device and will serve people with disabilities.
The device itself is a Tiva C LaunchPad that I program from scratch, meaning I have full control over it.
In my vision, the user will say something like "OK Google, press play button", and as a result the Google Home device will send a direct press_play_button command to the IoT device, preferably via the local network.
I found the Google Actions SDK, along with the Local SDK extension, but if I understood correctly, I have to enter app mode first ("OK Google, play {app_name}") before pronouncing the action I want, which is inconvenient.
Is there any way to achieve my requirement?
If not, I may give up on local network control and use a sort of webhook to send an HTTP request to my smart device; in that case, I wonder whether MQTT would be more suitable.
Thanks.
The Local SDK is an extension to the Smart Home API. If your device matches up with the device types and traits that the Smart Home API supports then you can use that to control your device.
It has support for media players so things like play/stop should be possible.
I have built generic Smart Home control using MQTT to reach the device, but you have to provide an HTTP endpoint for the Google system to interface with. This takes a little thought, as you have to map MQTT's asynchronous approach onto HTTP's synchronous nature.
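To make that mapping concrete, here is a minimal sketch (assuming paho-mqtt, with hypothetical broker address, topic names, and an arbitrary 5-second timeout) of how an HTTP EXECUTE handler can publish the command over MQTT and block until the device acknowledges it, so the webhook can still respond synchronously:

```python
# Hedged sketch: bridging a synchronous HTTP fulfillment call to asynchronous MQTT.
# Broker address, topic names, device IDs and the timeout are illustrative only.
import json
import threading

import paho.mqtt.client as mqtt  # paho-mqtt 1.x-style callbacks

ack_event = threading.Event()
last_ack = {}

def on_message(client, userdata, msg):
    """The device publishes its acknowledgement on devices/<id>/ack."""
    global last_ack
    last_ack = json.loads(msg.payload)
    ack_event.set()

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.example.local", 1883)
client.subscribe("devices/blinds-1/ack")
client.loop_start()

def execute_command(command: dict) -> dict:
    """Called from the HTTP EXECUTE handler; returns device state or an error."""
    ack_event.clear()
    client.publish("devices/blinds-1/commands", json.dumps(command))
    # Hold the synchronous HTTP request open until the asynchronous MQTT ack arrives.
    if ack_event.wait(timeout=5.0):
        return {"status": "SUCCESS", "states": last_ack}
    return {"status": "ERROR", "errorCode": "deviceNotResponding"}
```

If the device can take a while to act, you can instead acknowledge immediately with a PENDING status and report the final state later, but the blocking approach above is the simplest way to reconcile the two models.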
With Apple getting ever more stringent and enforcing 2FA on all iOS accounts, a unique challenge we've encountered is how to set up a system where Apple's 2FA codes for a shared dev account can be forwarded to, say, our private Slack channel, thereby giving multiple team members access to Apple's services.
I got so annoyed by this that we then developed a hack.
It's a Python script running in a terminal on the Mac mini CI machine that, using some Vision SDK tooling, checks the screen every few seconds for the 2FA dialog pop-up.
If it finds it, it then navigates the pop-up, crops the 2FA code, does OCR on it, and relays the code through the Slack API to a dedicated internal buildbot channel that our devs get notified on.
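The core loop is roughly the sketch below. It's a simplified reconstruction rather than the actual script: pytesseract stands in for the Vision SDK tooling mentioned above, and the template image, crop region, and Slack webhook URL are hypothetical placeholders.

```python
# Hedged sketch of the watcher loop: spot the 2FA dialog, crop the code,
# OCR it, and relay it to Slack. All paths, coordinates and URLs are placeholders.
import re
import time

import pyautogui     # screen capture and template matching
import pytesseract   # OCR, standing in for the Vision tooling
import requests      # Slack incoming webhook

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

while True:
    # Look for the 2FA dialog by matching a saved screenshot of its title bar.
    dialog = pyautogui.locateOnScreen("2fa_dialog_title.png", confidence=0.9)
    if dialog:
        # Crop the part of the dialog where the six-digit code appears.
        code_region = (dialog.left, dialog.top + 60, dialog.width, 40)
        image = pyautogui.screenshot(region=code_region)
        text = pytesseract.image_to_string(image)
        match = re.search(r"\b\d{6}\b", text)
        if match:
            requests.post(SLACK_WEBHOOK, json={"text": f"Apple 2FA code: {match.group()}"})
            time.sleep(60)  # don't post the same code twice
    time.sleep(5)
```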
Sample cropped 2FA code screenshot:
The result is that the shared Apple ID account is now much more convenient and efficient for our devs to access, whether for CI/CD or for any other App Store info updates.
So that’s one way to do it. 😎
Is there any way to create your own Google IoT device based on webhooks and POST requests, without using Firebase, IFTTT, or Node.js?
The samples Google provides are very poor; they don't show all the steps of creating your own app, they just show how to deploy their sample.
I tried to make an Action with Dialogflow and a webhook, and it was pretty simple: I just processed the JSON of the POST request in an Azure Function.
But when I try to create an IoT device, it asks me for a fulfillment URL and doesn't even try to reach that address. I read about action.devices.SYNC and action.devices.EXECUTE, but it just does not communicate with the specified address, and giving the simulator a voice command has no effect at all. Are there any ways to create an IoT device that works with POST requests and webhooks?
The answer is it depends.
There are many different ways to do server-device communication: web sockets, local servers, hub/local control, polling, MQTT, and likely many others. All of these solutions have trade-offs, and work in particular circumstances. Depending on exactly what IoT device you want to build, its requirements and technical specs, and what cloud providers you are using, you may identify what works best.
If you run the sample, you'll see it sends JSON requests to a server and expects JSON responses back. This is much like Dialogflow and a webhook. In this case, the smart home platform communicates solely with the server.
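To show the shape of that exchange, here is a minimal sketch of such a fulfillment endpoint. Flask is used purely for brevity (the same JSON handling would fit in an Azure Function), and the device, its traits, and its states are hypothetical:

```python
# Hedged sketch of a smart home fulfillment webhook: accept the platform's JSON
# POST, branch on the intent, and return JSON. The device data is a placeholder.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    body = request.get_json()
    intent = body["inputs"][0]["intent"]

    if intent == "action.devices.SYNC":
        payload = {
            "agentUserId": "user-123",
            "devices": [{
                "id": "player-1",
                "type": "action.devices.types.SWITCH",
                "traits": ["action.devices.traits.OnOff"],
                "name": {"name": "My player"},
                "willReportState": False,
            }],
        }
    elif intent == "action.devices.EXECUTE":
        # This is where you would forward the command to the device
        # (plain HTTP, MQTT, or whatever push channel you have).
        payload = {
            "commands": [{
                "ids": ["player-1"],
                "status": "SUCCESS",
                "states": {"on": True, "online": True},
            }],
        }
    else:  # QUERY and DISCONNECT omitted for brevity
        payload = {}

    return jsonify({"requestId": body["requestId"], "payload": payload})
```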
Your server can then communicate with the device in any way that you want. I'm not too familiar with Azure offerings. It might have an MQTT service as well, or some other sort of push notification service you might be able to use.
If you're seeing simulator issues, you may need to make sure your authentication is set up correctly, and you'll need to first complete account linking on your phone before you can use the simulator.
I've developed an Electron application and packaged it for Windows x32. It's a standalone desktop app, and I want to make sure it doesn't communicate with the outside world. When I launch the compiled application for the first time, I get a prompt asking me if I'd like to "Allow incoming network connections".
If I say no, I believe the app doesn't run properly, as it will be added to my firewall's blacklist. Any advice on the proper practices for achieving this?
I want to block any incoming/outgoing traffic to/from my Electron app while ensuring it runs smoothly.
By allowing incoming connections you may get into trouble; as long as you have a proper firewall rule to prevent them, you can proceed.
We have an application that normally communicates with our AWS backend, which triggers some Lambdas to process some cloud data and generate a result. This works great; however, our application needs to be able to run in an offline mode. I know that AWS Greengrass can let you execute your Lambda code locally for an IoT device. What I want to know is: is it possible to leverage Greengrass this way from a mobile app? Specifically iOS, but I would be curious whether this would work for Android as well. Is there anything that would prevent me from leveraging the AWS Greengrass stuff from inside my mobile application?
Thank you!
As it happens, I've worked quite a lot with both iOS and Greengrass, but I've never heard of a GG daemon that could run on iOS. And even if there were one, your app would most likely violate the App Store Review Guidelines, specifically rule 2.5.2, which prohibits downloading code from external sources except in some special cases. So you wouldn't be able to publish your app anyway.