I've already implemented a WebRTC video call in iOS. I want to add AR to the video call, e.g. when a user draws a line in their local view, the remote user can see it.
I'm working on an open-source app that does exactly this, using Agora's network instead of WebRTC. It works pretty well as a proof of concept.
Two devices connect to the same channel, one as the host and one as the support. When the support user draws on their screen, it adds a line in 3D for the other user.
This link goes straight to the latest commit, as it's not all merged to the default branch yet:
https://github.com/AgoraIO-Community/AR-Remote-Support/tree/5a0b3a3c7c32b05e0548831bd80007ea624f8851
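This isn't the repo's code, but as a rough sketch of the general idea (assuming an ARSCNView on the rendering side), each incoming drawing point can be turned into a small dot placed a fixed distance in front of the camera, so a stream of touches forms a 3D line:

```swift
import ARKit
import SceneKit
import UIKit

// Hedged sketch only — not the linked project's implementation.
// Each time a drawing point arrives from the remote user, drop a small
// sphere 0.3 m in front of the current camera pose; a stream of these
// forms a 3D line. Mapping the remote 2D touch location onto that plane
// is omitted here for brevity.
func addLinePoint(in sceneView: ARSCNView) {
    guard let frame = sceneView.session.currentFrame else { return }

    // Offset 0.3 m along the camera's -Z axis (i.e. straight ahead).
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.3
    let transform = simd_mul(frame.camera.transform, translation)

    let dot = SCNNode(geometry: SCNSphere(radius: 0.002))
    dot.geometry?.firstMaterial?.diffuse.contents = UIColor.systemRed
    dot.simdTransform = transform
    sceneView.scene.rootNode.addChildNode(dot)
}
```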
Is there any way to implement an iOS app that has access to the screen (e.g. screen recording) even when it's backgrounded? Does anyone have experience with this?
Apps like TeamViewer do this, but it's not clear to me if they went through a special process with Apple (e.g. a non-open API).
P.S. I am of course assuming that the user would have to explicitly accept this (e.g. like for system extensions on macOS); the goal here is not to make a malicious app but a remote-control tool.
The only way to record the screen in the background is by using the broadcast upload extension in ReplayKit 2. This WWDC talk goes into more detail on how to use this API: https://developer.apple.com/videos/play/wwdc2018/601/
Since it's not specifically designed for your use case, you will have to do some things differently, like storing the frames locally in your App Group instead of uploading them.
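For reference, a broadcast upload extension boils down to an RPBroadcastSampleHandler subclass. Here's a minimal sketch of the "store frames in the App Group instead of uploading" idea; the group identifier and the encoding step are placeholders for your own implementation:

```swift
import ReplayKit
import CoreMedia

// Minimal sketch of a broadcast upload extension that keeps frames in the
// shared App Group container instead of uploading them. The group ID and
// the encoding/writing step are placeholders.
class SampleHandler: RPBroadcastSampleHandler {

    // Hypothetical App Group shared with the containing app.
    private let appGroupID = "group.com.example.screenshare"

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        guard sampleBufferType == .video,
              let containerURL = FileManager.default
                  .containerURL(forSecurityApplicationGroupIdentifier: appGroupID),
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        // Encode `pixelBuffer` (e.g. with AVAssetWriter or VideoToolbox) and
        // persist the result somewhere under `containerURL` for the host app.
        _ = (pixelBuffer, containerURL)
    }
}
```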
I have integrated Twilio Programmable Video in my sample app.
I have 2 apps, and both join the same room.
On the publishing side it works just fine. I am using the AR camera instead of a normal one. I am able to view the other person's preview, and audio also works fine.
But on the receiver side, the remote view seems to be blank. It doesn't load the other person's back camera view at all. The same code was working before and it suddenly stopped working.
Please find the Swift file that has the receiver code at the URL below:
https://www.dropbox.com/s/j0uxt3cv5iqznc0/ARHelpViewController.swift?dl=0
Twilio developer evangelist here.
When you subscribe to a TVIRemoteVideoTrack, you must also wait for the subscribedToVideoTrack:publication:forParticipant: callback to confirm that you are truly subscribed to the video track and that data will then be forthcoming.
You can also query hasVideoData to determine whether frames have already been received for that view.
I also believe that a known limitation in the current implementation of TVIVideoView is that if you reuse a view by adding it as a renderer to a different TVIVideoTrack, the hasVideoData property will not be reset and no videoViewDidReceiveData: will be sent. The workaround for that is to make a new TVIVideoView for any TVIVideoTracks that you wish to render.
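As a rough sketch (using the TVI-prefixed API; exact delegate signatures may differ in your SDK version, so treat this as an assumption), the subscription callback is the place to create a fresh view and attach it:

```swift
import TwilioVideo

// Hedged sketch, not verified against your SDK version: create a brand-new
// TVIVideoView each time a remote video track is subscribed, instead of
// reusing an old renderer whose hasVideoData state may be stale.
extension ARHelpViewController: TVIRemoteParticipantDelegate {

    func subscribed(to videoTrack: TVIRemoteVideoTrack,
                    publication: TVIRemoteVideoTrackPublication,
                    for participant: TVIRemoteParticipant) {
        let remoteView = TVIVideoView(frame: view.bounds)
        videoTrack.addRenderer(remoteView)
        view.addSubview(remoteView)
    }
}
```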
I'd also recommend checking out this blog post on ARKit with Twilio Video, or this blog post on ARKit with Twilio Video and the Data Tracks API.
I need to develop an app that uses Vuforia cloud recognition of an object and then displays a video on top of that object. The video file needs to be downloaded from the internet from a separate web service, using the recognized object's identifier.
I was looking at the Vuforia samples and was able to configure Cloud Recognition to use the correct Target Manager database; objects are recognized correctly. But I don't know how to make it so that, after detecting the object, a loader view is displayed, and then the video is shown once it is ready to play. I don't know where and what to update in the code. I only found that a local dataset can be used, but I can't use a local dataset, because the videos I want to display are supposed to be downloaded from the internet after detection.
Can someone direct me to where in the Vuforia samples I can update what is shown on the target?
Vuforia Cloud Recognition only handles the markers (targets) themselves.
As per your requirements:
1. You need to use iOS code to download the video yourself.
2. After that you have to show that video using AVPlayer (see the sketch after this list).
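A minimal sketch of those two steps, assuming your own web service maps a target identifier to a video URL (the endpoint format and function name here are placeholders); you can show your loader view while the download is in flight and replace it when playback starts:

```swift
import AVKit
import AVFoundation
import UIKit

// Sketch only: once Vuforia reports the recognized target's identifier,
// download the matching video from your own web service and present it
// with AVPlayer. The URL scheme below is a placeholder.
func playVideo(forTargetID targetID: String, from presenter: UIViewController) {
    guard let url = URL(string: "https://example.com/videos/\(targetID).mp4") else { return }

    URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else { return }

        // Move the download somewhere stable before handing it to AVPlayer.
        let destination = FileManager.default.temporaryDirectory
            .appendingPathComponent("\(targetID).mp4")
        try? FileManager.default.removeItem(at: destination)
        try? FileManager.default.moveItem(at: tempURL, to: destination)

        DispatchQueue.main.async {
            let playerVC = AVPlayerViewController()
            playerVC.player = AVPlayer(url: destination)
            presenter.present(playerVC, animated: true) {
                playerVC.player?.play()
            }
        }
    }.resume()
}
```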
I'd like to stream video from the camera on an iOS device to a receiver via Wi-Fi, in effect turning the device into a wireless webcam. Is there a way to build a small app that captures video input on an iOS device and sends it out as an RTSP stream or similar?
As this is an ad hoc experiment, I'm not concerned about App Store guidelines and can jailbreak if necessary.
If I interpret your question correctly, you more or less need to solve four problems:
Get the camera feed.
Convert/encode this to the right format.
Stream the data.
Prevent the phone from locking itself and going into deep sleep.
The first one is fairly simple, and Apple has, as always, provided good documentation and examples (API link). Make sure you read their example to the end, as that's where you get a CMSampleBufferRef data object back.
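For reference, here's a minimal sketch of that first step, getting each camera frame delivered as a CMSampleBuffer (the class and queue names are just illustrative):

```swift
import AVFoundation

// Minimal sketch: capture the camera feed and receive each frame as a
// CMSampleBuffer. Error handling and session configuration are trimmed.
final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let outputQueue = DispatchQueue(label: "camera.feed.output")

    func start() {
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: outputQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    // Each frame arrives here as a CMSampleBuffer, ready for encoding/streaming.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Hand the buffer to your encoder here.
    }
}
```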
For the second and third parts, you should check out the CFNetwork framework, specifically CFFTPStream for streaming over FTP.
If you are only building this for yourself, you can always turn off the Auto-Lock feature in Settings. If, on the other hand, you would like to distribute this to other users, you could use the trick of playing a muted sound every 10 seconds. This is more or less how all the alarm clock apps in the App Store work. Here's a tutorial. =)
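(A small aside, my addition rather than part of the answer above: the idle timer can also be disabled programmatically while the stream is running, which avoids the audio trick.)

```swift
import UIKit

// Keep the device awake while streaming; remember to set this back to false
// when the stream stops so the normal Auto-Lock behaviour returns.
UIApplication.shared.isIdleTimerDisabled = true
```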
I hope I helped a little bit at least.
Good luck and best regards!
I'm 70% of the way to doing the same thing. Here's how I did it:
Capture content from the video input.
Chop the video into files for use with HTTP Live Streaming.
Spin up a web server on the iPhone and make the video files available.
Connect to the IP address of the phone and voilà! You've got live streaming video.
The last time I touched the code, I was trying to debug why my live streaming wasn't working. I'll try to get my source code posted on GitHub this weekend if you'd like to take a look.
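For what it's worth, here's a tiny sketch of the playlist side of steps 2 and 3: each short segment file just needs to be listed in an .m3u8 playlist that the web server exposes alongside the segments (segment names and durations below are placeholders):

```swift
// Build a minimal live HLS playlist for a set of already-written segment
// files. Real code would update this as new segments are produced.
func makePlaylist(segmentNames: [String], segmentDuration: Int = 4) -> String {
    var lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:\(segmentDuration)",
        "#EXT-X-MEDIA-SEQUENCE:0"
    ]
    for name in segmentNames {
        lines.append("#EXTINF:\(segmentDuration).0,")
        lines.append(name)
    }
    return lines.joined(separator: "\n")
}

// Example usage:
// let playlist = makePlaylist(segmentNames: ["segment0.ts", "segment1.ts"])
```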
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls. The app would allow the user to enter a mode where the microphone was live and listened for predefined keywords like 'down', 'up', 'next', 'back', 'home', etc.
I don't want to reinvent the wheel on this, so I'm wondering, first, whether someone has done this already, and if not, whether there are any good tutorials or SDKs available to help with recording someone's voice and then comparing future input to see if it matches, or just with handling the microphone in general.
Let's put aside that this is a fairly vaguely worded question for the moment.
If you are expecting to allow voice control in your app that somehow works throughout the entire device, it's just not possible. Your app would only work to control itself -- or at least itself and whatever external hooks you can normally get to the rest of the device, like, say, playing a song out of the user's iTunes library.
If you're planning on doing this in a jailbroken environment, then you should find some open-source library that does voice recognition -- if there are any -- and start from there. Be prepared for a very long haul, though.
Dragon Mobile SDK is what you're looking for.
http://dragonmobile.nuancemobiledeveloper.com/
There may be other voice recognition SDKs out there, but this is the only one I can think of off the top of my head.
You can find a library called CMU Sphinx. There's an iPhone version of it called PocketSphinx. See if it fits your needs.
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls.
The new iOS 13 feature Voice Control fully meets your request, because you can control your device and your app with your voice exactly as you would with touches.
It's also possible, for instance, to define actions for specific words.
The device settings for this feature are well detailed under Settings > Accessibility > Voice Control.
If you need dedicated names to be recognized for elements in your app, use the accessibilityUserInputLabels property to define them.
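For example (the button and label strings here are just illustrative):

```swift
import UIKit

// Give a control alternative names that Voice Control will recognize when
// the user says them, e.g. "Tap Next" or "Tap Forward".
let nextPageButton = UIButton(type: .system)
nextPageButton.setTitle("→", for: .normal)
nextPageButton.accessibilityUserInputLabels = ["Next", "Next page", "Forward"]
```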
That's definitely the built-in tool you need to reach your goal: no need for an external library or SDK, everything is provided natively. ;o)