I want to make a bot on Mattermost that lets me control YouTube on my smart TV. However, I don't know how to do this without using Google Chromecast.
The idea is to use a command like "!play Never Gonna Give You Up" in Mattermost and have the video start playing on the TV.
My initial idea was to communicate with the TV through some API, but I found no references for one.
Then I thought about speaking the same network protocol the TV uses, but I couldn't find a reference on how to do that either, and I don't know whether it is possible.
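For what it's worth, many smart TVs ship their YouTube app with support for DIAL (Discovery and Launch), the discovery protocol Chromecast grew out of, and that may be the "same protocol as the TV" you are after. Below is a rough, untested sketch of a Mattermost outgoing-webhook handler that discovers a DIAL device and asks it to launch YouTube with a video id. The endpoint path, the assumption that the command carries a raw video id, and the exact launch payload are all my own guesses; newer TV firmwares may require a pairing step that this ignores.

```python
# Hypothetical sketch (untested): a Flask handler for a Mattermost outgoing
# webhook that launches a YouTube video on a DIAL-capable smart TV.
# The trigger word, endpoint path, and launch payload are assumptions.
import re
import socket

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

def find_dial_location(timeout=3.0):
    """Find a DIAL device via an SSDP M-SEARCH; return its description URL."""
    msg = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        "ST: urn:dial-multiscreen-org:service:dial:1",
        "", "",
    ]).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(msg, ("239.255.255.250", 1900))
    try:
        data, _ = sock.recvfrom(4096)
    except socket.timeout:
        return None
    match = re.search(rb"(?im)^LOCATION:\s*(\S+)", data)
    return match.group(1).decode() if match else None

@app.route("/mattermost", methods=["POST"])
def play():
    # Mattermost sends the message as form data; assume "!play <video-id>".
    text = request.form.get("text", "")
    video_id = text.split(maxsplit=1)[-1]
    location = find_dial_location()
    if location is None:
        return jsonify(text="No DIAL device found on the network")
    # Per the DIAL spec, the description response carries an Application-URL
    # header; POSTing to <Application-URL>/YouTube launches the app. Older
    # firmwares accept a plain "v=<id>" body; newer ones may require pairing.
    desc = requests.get(location, timeout=5)
    app_url = desc.headers["Application-URL"].rstrip("/")
    requests.post(f"{app_url}/YouTube", data={"v": video_id}, timeout=5)
    return jsonify(text=f"Asked the TV to play {video_id}")

if __name__ == "__main__":
    app.run(port=5000)
```

You would register this handler's URL as an outgoing webhook in Mattermost with !play as the trigger word; resolving a free-text title like "Never Gonna Give You Up" to a video id would additionally need something like the YouTube Data API's search endpoint.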
I am trying to build a custom IoT device that will be controlled via a Google Home device and serve people with disabilities.
The device itself is a Tiva C LaunchPad that I program from scratch, meaning I have full control over it.
In my vision, the user will say something like "Ok Google, press play button", and as a result the Google Home device will send a direct press_play_button command to the IoT device, preferably via the local network.
I found the Google Actions SDK, along with the Local SDK extension, but if I understood correctly, I have to enter the app mode first ("OK Google, play {app_name}") before saying the action I want, which is inconvenient.
Is there any way to achieve my requirement?
If not, I may give up on local network control and use some sort of webhook to send HTTP requests to my smart device, in which case I wonder whether MQTT would be more suitable.
Thanks.
The Local SDK is an extension to the Smart Home API. If your device matches up with the device types and traits that the Smart Home API supports then you can use that to control your device.
It has support for media players so things like play/stop should be possible.
I have built generic Smart Home control using MQTT to reach the device, but you have to provide an HTTP endpoint for the Google system to interface with. This takes a little thought, as you have to map MQTT's asynchronous approach onto HTTP's synchronous nature.
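As a rough illustration of that mapping (the topic names, broker address, and timeout below are assumptions, and real Smart Home fulfillment payloads are considerably richer), the HTTP endpoint can publish a command over MQTT and block briefly on a reply topic:

```python
# Sketch of an HTTP fulfillment endpoint bridging to MQTT devices.
# Topic names, broker address, and the 2-second wait are assumptions;
# real Smart Home fulfillment payloads are considerably richer.
import json
import queue
import uuid

import paho.mqtt.client as mqtt
from flask import Flask, jsonify, request

app = Flask(__name__)
replies = {}  # request id -> Queue the MQTT thread pushes the reply into

client = mqtt.Client()  # paho-mqtt 1.x style constructor

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    waiter = replies.get(payload.get("id"))
    if waiter:
        waiter.put(payload)

client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("devices/+/reply")
client.loop_start()  # MQTT runs on its own thread

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    cmd = request.get_json()  # e.g. {"device": "player1", "action": "play"}
    req_id = str(uuid.uuid4())
    waiter = queue.Queue()
    replies[req_id] = waiter
    client.publish(f"devices/{cmd['device']}/cmd",
                   json.dumps({"id": req_id, "action": cmd["action"]}))
    try:
        # Block briefly for the asynchronous reply, then answer the HTTP call.
        result = waiter.get(timeout=2.0)
        return jsonify(status="SUCCESS", state=result.get("state"))
    except queue.Empty:
        # Device did not answer in time; report rather than hang the request.
        return jsonify(status="PENDING"), 202
    finally:
        replies.pop(req_id, None)

if __name__ == "__main__":
    app.run(port=8080)
```

The bounded wait is the crux: the HTTP side has to answer within Google's deadline, so if the device hasn't replied in time you return a pending or error state instead of hanging.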
I have read the documentation at https://developers.google.com/cast/ for casting purposes.
The requirement is to develop an application that allows the user to cast video to a TV.
My question is: as developers, we only need to take care of the sender side, right?
If not, how do we write the receiver-side code?
Is there any way to test it without using a TV and a Chromecast device?
Thank you
It might be good to read some more about Cast on the same site under "Guides"; that tells you all the components needed.
You don't have to write a receiver if you are happy with the Default/Styled receiver that is provided by Google. Read the link above to see how it can be done.
For testing, you need a real cast device like a Chromecast or Android TV.
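If you want to exercise the sender flow against the Default Media Receiver from a desktop before writing a mobile sender, the community pychromecast library is one option; it is not part of the official Cast SDK, and a real cast device on the network is still required:

```python
# Sender-only test against Google's Default Media Receiver using the
# community pychromecast library (pip install PyChromecast). A real cast
# device on the same network is still required; the media URL is a placeholder.
import pychromecast

chromecasts, browser = pychromecast.get_chromecasts()
if not chromecasts:
    raise SystemExit("No cast devices found on the network")

cast = chromecasts[0]
cast.wait()  # block until the connection to the device is ready

mc = cast.media_controller
mc.play_media("http://example.com/sample.mp4", "video/mp4")
mc.block_until_active()
print(mc.status)

browser.stop_discovery()
```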
I apologise if the title of my question causes confusion about the problem; I will explain my purpose in detail.
We are currently developing our own WiFi speaker, which is built on MIPS. The speaker comes with an app that will be used to manage it. One of the features we would like to include in the app is accessing Spotify content and playing it on the speaker.
Unfortunately, after going through the iOS SDK documentation and running some tests in the Web API Console provided by Spotify, I noticed that Spotify does not allow developers to directly get the URL of a song, except for preview purposes. I also wasn't able to find any way to get the raw bytes of the music streamed from the server. Each piece of content comes with a corresponding URI that is used for requests.
For the device (WiFi speaker) part, we recently contacted Spotify to ask for an SDK that can be used for development. However, Spotify told us that they have SDKs for x86 and ARM architectures only; they don't have one for MIPS.
Now, here are my questions:
Is there any way for me to push music from an app to the WiFi speaker without having to use an SDK on the device side?
If Spotify can provide an SDK for our device, then how can we integrate the SDK with our platform?
I'll explain my second question for clarity. Android and iOS, for instance, are popular platforms widely used on mobile devices, so if Spotify provides SDKs for those two operating systems, apps can use the default system frameworks to access the player for playing the content (in iOS, that's the AVFoundation framework). However, if Spotify were able to provide the SDK we need, how would we be able to integrate it with our own platform?
I will answer your question no. 1:
You should be able to push music from an app using a buffer that you can read from with Core Audio and then forward to a device of your choice. I think what you are looking for can be found in CocoaLibSpotify.
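On the preview point raised in the question: the Web API does expose a 30-second preview_url on track objects, which is the only audio you can fetch directly without an SDK. A minimal sketch, with the token and track id as placeholders:

```python
# Sketch: fetching the 30-second preview_url of a track from the Spotify
# Web API. ACCESS_TOKEN and TRACK_ID are placeholders; as noted above,
# full tracks cannot be fetched this way, only previews.
import requests

ACCESS_TOKEN = "your-oauth-token"  # placeholder: obtain via any OAuth flow
TRACK_ID = "your-track-id"         # placeholder: a Spotify track ID

resp = requests.get(
    f"https://api.spotify.com/v1/tracks/{TRACK_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
preview_url = resp.json()["preview_url"]  # can be None for some tracks

if preview_url:
    audio = requests.get(preview_url, timeout=10).content  # raw MP3 bytes
    with open("preview.mp3", "wb") as f:
        f.write(audio)
```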
Hope someone can help me.
We are developing an Android app that integrates the latest YouTube Player API. Everything seems to be OK when the user loads a video (using the video ID) and watches it.
However, in our lab we have identified a strange behaviour in the YouTube player when the network quality at the user's location degrades. We expected the YouTube player to adapt the video quality automatically to the network conditions (we make no explicit quality selection, assuming the default is automatic), but it does not, while the native YouTube app does.
Let me describe the test in which we observed this behaviour. Setup:

- An Android device with the native YouTube app and our app installed, connected to our WLAN network.
- The WLAN network provides internet access and lets us inject impairments (e.g. reduce the bandwidth).

Steps:

1. Configure excellent bandwidth in the network.
2. Start the video and watch for some seconds.
3. Configure bad bandwidth (e.g. 256 kbit/s) in the network.
4. After a few seconds:
   a. the YouTube app stalls just a little, then decreases the quality and continues playing the video;
   b. our AT4-App stalls for longer before continuing to play at the initial quality.
We think we are already using the YouTube API to its fullest, so we don't know whether we can make our app behave exactly like the native YouTube app; it doesn't make sense that the API has less functionality than the native app (which presumably relies on the same APIs).
Thanks
I have been using the upnpx library to discover the TV via the UPnP protocol.
What I have to do now is pair my iOS app with the TV as a remote controller.
The first objectives are to control the sound volume, move the mouse cursor, browse in the web browser, etc.
I have tried googling for the urn:samsung.com:device:RemoteControlReceiver:1 specification, but I had a hard time finding useful information.
Has someone done this before who could give me directions, or the technical specification, for controlling the TV from a remote app?
Regards,
You could try using Charles; that way you will be able to sniff your network traffic. Then try to duplicate the functionality in your app.
Just for the people who face the same issue:
I paired my iOS app with the Samsung TV through a TCP socket (I used GCDAsyncSocket to handle this), using the great information I found here:
http://sc0ty.pl/2012/02/samsung-tv-network-remote-control-protocol/
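For anyone who wants the shape of that protocol without reading the whole post, here is a minimal sketch following the packet layout described there, in Python rather than Objective-C just to make the byte layout explicit. Only older (pre-2014) models speak this legacy port-55000 protocol, and all addresses and names below are placeholders:

```python
# Sketch of the legacy Samsung remote protocol (TCP port 55000) as
# described at sc0ty.pl. Error handling omitted; addresses/names are
# placeholders.
import base64
import socket
import struct

TV_IP = "192.168.1.50"
MY_IP = "192.168.1.10"
MY_MAC = "aa-bb-cc-dd-ee-ff"
REMOTE_NAME = "my.remote"   # shown in the TV's allow/deny prompt
APP_NAME = "python.remote"

def _block(data: bytes) -> bytes:
    """Little-endian uint16 length prefix followed by the payload."""
    return struct.pack("<H", len(data)) + data

def _b64(text: str) -> bytes:
    return _block(base64.b64encode(text.encode()))

def send_key(key: str) -> None:
    sock = socket.create_connection((TV_IP, 55000), timeout=5)
    # Handshake packet; the first connection from a new REMOTE_NAME makes
    # the TV pop up an allow/deny dialog.
    auth = b"\x64\x00" + _b64(MY_IP) + _b64(MY_MAC) + _b64(REMOTE_NAME)
    sock.sendall(b"\x00" + _block(APP_NAME.encode()) + _block(auth))
    sock.recv(2048)  # auth response, ignored in this sketch
    # Key packet, e.g. "KEY_VOLUP", "KEY_MUTE", "KEY_PLAY".
    payload = b"\x00\x00\x00" + _b64(key)
    sock.sendall(b"\x00" + _block(APP_NAME.encode()) + _block(payload))
    sock.close()

if __name__ == "__main__":
    send_key("KEY_VOLUP")
```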