I have designed a small text-based chat system with SignalR, but I would like to add a feature for sending a file from one user to another. Is this possible with WebRTC?
I found some sample code at this URL, but it seems to only cover uploading a file from the client to the server. That site also has some WebRTC code.
However, I am looking for code that lets two users send files to each other with real-time upload progress. If anyone knows of a website that explains how to do this, please point me to it, or discuss it here with a code sample for sending a file between two users.
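In the meantime, here is a minimal sketch of one possible approach that skips WebRTC entirely and relays file chunks through the existing SignalR hub. It is written against ASP.NET Core SignalR (the classic SignalR 2.x API differs slightly), and the hub, method, and event names are made up for this example:

```csharp
// Illustrative only: relay file chunks through the SignalR hub instead of a
// WebRTC data channel. FileHub, SendChunk, ReceiveChunk and ChunkDelivered
// are hypothetical names for this sketch.
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

public class FileHub : Hub
{
    // The sending client splits the file into chunks and calls this method
    // once per chunk, addressing the receiver's connection id.
    public async Task SendChunk(string targetConnectionId, string fileName,
                                byte[] chunk, int chunkIndex, int totalChunks)
    {
        // Forward the chunk to the receiving client, which reassembles the file
        // and can display progress as chunkIndex / totalChunks.
        await Clients.Client(targetConnectionId)
                     .SendAsync("ReceiveChunk", fileName, chunk, chunkIndex, totalChunks);

        // Echo progress back to the sender so both ends can show an upload bar.
        await Clients.Caller.SendAsync("ChunkDelivered", fileName, chunkIndex, totalChunks);
    }
}
```

A WebRTC data channel would move the chunks peer to peer instead of through the server, but the chunking and progress bookkeeping stay essentially the same.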
Thanks.
I am currently working on a real-time chat app in Unity and found these platforms to work with:
Firebase: can it send videos efficiently?
MatriX: https://www.ag-software.net/matrix-xmpp-sdk/ (but I am not sure whether it can send videos)
From your experience, what is the best way to build real-time chat (with photo and video sending) in Unity?
Thanks in advance.
You need to find or create services that your clients can connect to in order to:
1. upload files (photos, videos, etc.) and get back a public, downloadable URL;
2. send messages to other connected clients that, apart from the text, also contain media metadata (e.g. a list of file attachments, which are really the URLs returned by service 1).
If you cannot find a single service that supports both, you could combine two different ones.
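For example, the message sent over service 2 could be little more than text plus a list of attachment URLs. Here is a minimal sketch; the class and property names are illustrative, not part of any particular SDK:

```csharp
// Illustrative shape of a chat message whose attachments are just the URLs
// returned by the upload service (service 1). All names are hypothetical.
using System.Collections.Generic;

public class FileAttachment
{
    public string Url { get; set; }         // public, downloadable URL from the upload service
    public string ContentType { get; set; } // e.g. "image/png" or "video/mp4"
    public long SizeBytes { get; set; }
}

public class ChatMessage
{
    public string SenderId { get; set; }
    public string Text { get; set; }
    public List<FileAttachment> Attachments { get; set; } = new List<FileAttachment>();
}
```

The client uploads the file first, receives the URL from service 1, and only then sends the ChatMessage through service 2; receiving clients download the media from the URLs when they render the message.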
Here is an example of a chat console application in C#. It contains a web service and a client library that the console app uses; instead of a console app, the same client library could be used in a Unity app. It does not support file uploading, but it can send messages between clients over WebSockets.
If you were to create something yourself instead of using a third-party service, I would recommend node.js/express with socket.io for the server, since it is quite beginner friendly.
Here is a C# client library that can listen for socket.io events from the server. It should be the same one that is used in the console application I shared above.
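As a rough illustration of the client side, here is a sketch that assumes the SocketIOClient NuGet package (the library linked above may expose a different API) and a hypothetical "chat message" event name:

```csharp
// Sketch of a C# socket.io client; assumes the SocketIOClient NuGet package.
// The server URL and the "chat message" event name are placeholders.
using System;
using System.Threading.Tasks;
using SocketIOClient;

class ChatClient
{
    static async Task Main()
    {
        var client = new SocketIO("http://localhost:3000");

        // Listen for messages relayed by the socket.io server.
        client.On("chat message", response =>
        {
            Console.WriteLine($"Received: {response.GetValue<string>()}");
        });

        await client.ConnectAsync();

        // Send a message that the server can broadcast to other clients.
        await client.EmitAsync("chat message", "hello from C#");

        Console.ReadLine();
    }
}
```

The same pattern works inside Unity, although you would typically hook the events into your own message handlers instead of writing to the console.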
Basically what I'm trying to achieve is:
- whenever a new video is posted on my channel, trigger a Zap/IFTTT applet to download it to Dropbox as MP4 for backup purposes; as an added bonus, extract the audio to MP3.
I want this to run automatically on a free remote service, not on my PC or a VPS. I know all of this could easily be done locally, but I want an independent solution for a number of reasons.
The problem is that the YouTube API prohibits video downloads.
So far I have investigated web-based downloaders, but I couldn't figure out a way to get a download link automatically without visiting the website. CloudConvert doesn't support direct YouTube downloads.
The closest thing I have found is a web fork of youtube-dl that can run on ownCloud, but I haven't been able to find a free ownCloud provider that allows user apps.
There shouldn't be more than three short uploads to the channel per day, so performance and delays are not much of an issue; I'm happy to wait up to a day for the download to start.
Any help is much appreciated.
One step of the process could be Offcloud, which can fetch your YouTube video and store it in cloud storage such as Google Drive or an FTP server. It also has an API.
I have created an app in Objective-C. In this app there is a page where users can comment live, and to refresh the comments I am currently calling the web service every 5 minutes. I have no way of knowing when the data on the server has changed.
I want to call the service only when the data on the server has changed.
Is this possible, or is there some other way to handle the web service calls?
Thanks. Please answer if you know a correct way to solve this.
Go to the PubNub website and download the SDKs for both your server and Objective-C. PubNub is a popular streaming service built around publish/subscribe. After integrating the SDKs, make your client a subscriber and your server a publisher. Put simply, subscribers listen on channels for data; when you have a new comment, publish that comment from the server to the channel your client has already subscribed to. Keep in mind that free accounts are for demo purposes and have limitations.
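As a rough sketch of the publisher side, assuming the PubNub C# SDK (the PubnubApi NuGet package; method names can differ between SDK versions) and placeholder keys and channel name:

```csharp
// Sketch of the server-side publisher using the PubNub C# SDK (PubnubApi).
// Keys, user id and channel name are placeholders; details may vary by SDK version.
using System.Collections.Generic;
using System.Threading.Tasks;
using PubnubApi;

class CommentPublisher
{
    static async Task Main()
    {
        var config = new PNConfiguration(new UserId("comment-server"))
        {
            PublishKey = "pub-c-xxxxxxxx",   // from the PubNub admin dashboard
            SubscribeKey = "sub-c-xxxxxxxx"
        };
        var pubnub = new Pubnub(config);

        // Push the new comment to the channel the iOS client has subscribed to,
        // instead of waiting for the client's next 5-minute poll.
        PNResult<PNPublishResult> result = await pubnub.Publish()
            .Channel("live-comments")
            .Message(new Dictionary<string, string>
            {
                { "user", "alice" },
                { "text", "Great post!" }
            })
            .ExecuteAsync();

        if (result.Status.Error)
        {
            // Publish failed; log and retry as needed.
        }
    }
}
```

On the Objective-C side the client simply subscribes to the same channel and appends each received message to the comment list, so the 5-minute polling call can be removed.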
I have been checking out the Alexa Skills Kit for the past few days and poring through the documentation for both the Skills Kit and the Voice Service, but I am having a little trouble understanding the flow. I have implemented one of Amazon's sample skills (the favourite colour sample) in the developer console and also wrote a sample Lambda function to handle the type of response that will be delivered. It is working in the test simulator, and what is left is basically getting Lambda running through my iOS app. However, I have the impression that I don't have to use the Voice Service. Am I wrong? I am quite confused; it would be awesome if anybody with more clarity could shed some light on the matter. If I get Lambda working, I think it will accept requests that are in a particular format. Where do I have to send the encoded audio to get a JSON response to send to the Skills Kit? To the Alexa Voice Service?
Also, I am authenticating my app using Cognito and DynamoDB. If I were to use the Alexa Voice Service, it is mentioned that the user will also have to log in to Amazon. So do I still have to work with the Login with Amazon SDK, or is there a workaround?
Based on Amazon's documentation there are two ways to interact with Alexa (see below). It sounds like you want to implement your app through the Companion App method.
As far as the JSON goes, I am currently resolving that issue myself (I will post an answer once I have it resolved). Basically you have to use AVFoundation to capture audio on the iPhone and send two HTTPS messages to Alexa: one message with a JSON body and a second message with the captured audio as the body. Based on the documentation:
Companion App (Mobile or Website): You have a device (such as a smart speaker) that you want to add Alexa to, so you build in support for AVS. Great! Now you need a way to authorize it and associate it with the user's account. This is the "companion app" approach: the companion app connects to your smart product and allows the user to log in, authorize the speaker to use Alexa, and connect it to their Amazon account.
AVS App (Android or iPhone): You don't have a device you need to authorize; instead, you want to speak to Alexa from within your Android/iPhone application.
You can find a Swift example on GitHub of how to implement an iOS AVS client:
https://github.com/chintan1891/iOS-Alexa
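For illustration only, here is a rough sketch of the two-part request described above, written with C#'s HttpClient rather than AVFoundation so it stays self-contained; the endpoint, multipart part names, and JSON layout are placeholders, so check the current AVS documentation for the real values:

```csharp
// Sketch of the "JSON body + audio body" request flow described in this answer.
// Endpoint URL, multipart part names and the metadata JSON are placeholders.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AvsRequestSketch
{
    static async Task SendSpeechRequestAsync(byte[] capturedAudio, string accessToken)
    {
        using (var http = new HttpClient())
        {
            // Login with Amazon access token obtained after the user authorizes your app.
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var content = new MultipartFormDataContent();

            // Part 1: JSON metadata describing the speech request (placeholder structure).
            var metadataJson = "{ \"messageBody\": { \"format\": \"audio/L16; rate=16000; channels=1\" } }";
            content.Add(new StringContent(metadataJson, Encoding.UTF8, "application/json"), "metadata");

            // Part 2: the raw audio captured from the microphone (AVFoundation on iOS).
            var audioContent = new ByteArrayContent(capturedAudio);
            audioContent.Headers.ContentType = new MediaTypeHeaderValue("audio/L16");
            content.Add(audioContent, "audio");

            // Placeholder URL - use the endpoint given in the AVS documentation.
            HttpResponseMessage response =
                await http.PostAsync("https://avs.example.com/speechrecognizer/recognize", content);

            // The response carries Alexa's reply: JSON directives plus audio to play back.
            response.EnsureSuccessStatusCode();
        }
    }
}
```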
I am trying to develop a live-streaming application like the Meerkat app, where user A can broadcast a live stream while other users watch it. I am having trouble understanding the architecture and mechanisms used to upload video to a server. Currently, I am using a dedicated server with FFmpeg installed on it. I also know ffserver can be used for RTSP communication, but I am still unclear on how to do this. Can anyone guide me?
I would like to know how to upload videos to a server or whether there is another way to perform a live stream. Open source frameworks are welcome.
For live streaming video/audio, http://www.wowza.com/ gives you the best functionality. You have to set up your server in Wowza; also, you can't test without doing that. For iOS you can broadcast and receive using the demo below, which you can download from here.
I think it will be helpful to you :)
Well, I was searching for something open source that could be implemented without any additional cost. Luckily I found the Red5 server (open source): https://github.com/Red5/red5-server
I configured it on my dedicated server and it is running perfectly fine. Now that the server-side issue is solved, I needed something for the iOS side, and for that I found https://github.com/slavavdovichenko/MediaLibDemos3x
With the combination of these two repos I was able to make a live-streaming app like Meerkat.
Thanks