I need to develop an app that uses Vuforia cloud recognition of an object and then displays a video on top of that object. The video file has to be downloaded from the internet from a separate web service, using the recognized object's identifier. I have been looking at the Vuforia samples and was able to configure Cloud Recognition to use the correct target manager database, and objects are recognized correctly. But I don't know how to make it so that, after an object is discovered, a loader view is shown, and the video is displayed once it is ready to play. I don't know where and what to update in the code. I only found that a local dataset can be used, but I can't use a local dataset, because the videos I want to display are supposed to be downloaded from the internet after detection. Can someone point me to where in the Vuforia examples I can change what is shown on the target?
Vuforia Cloud Recognition only works with image targets (markers).
As per your requirements:
1. You need to use native iOS code to download the video.
2. After that, you have to play the video using AVPlayer (a sketch of both steps follows below).
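A minimal sketch of both steps, assuming you already receive a target identifier from the cloud recognition callback; the web-service URL and the view controller are placeholders of my own, not part of the Vuforia samples:

```swift
import UIKit
import AVKit
import AVFoundation

final class TargetVideoViewController: UIViewController {

    private let spinner = UIActivityIndicatorView(style: .large)

    override func viewDidLoad() {
        super.viewDidLoad()
        // Show a loader while the video is being fetched.
        spinner.center = view.center
        view.addSubview(spinner)
        spinner.startAnimating()
    }

    /// Call this with the identifier returned by Vuforia cloud recognition.
    func showVideo(forTargetID targetID: String) {
        // Hypothetical web service that maps a target ID to a video URL.
        let url = URL(string: "https://example.com/videos/\(targetID).mp4")!

        URLSession.shared.downloadTask(with: url) { tempURL, _, error in
            guard let tempURL = tempURL, error == nil else { return } // error handling omitted

            // Move the download out of the temporary location before playing it.
            let dest = FileManager.default.temporaryDirectory
                .appendingPathComponent("\(targetID).mp4")
            try? FileManager.default.removeItem(at: dest)
            try? FileManager.default.moveItem(at: tempURL, to: dest)

            DispatchQueue.main.async {
                self.spinner.stopAnimating()
                let playerVC = AVPlayerViewController()
                playerVC.player = AVPlayer(url: dest)
                self.present(playerVC, animated: true) {
                    playerVC.player?.play()
                }
            }
        }.resume()
    }
}
```

Note that AVPlayer can also stream straight from the remote URL, so the explicit download is only needed if you want the loader to stay up until the whole file is local.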
Related
I have already implemented a WebRTC video call in iOS. I want to add AR to the video call, e.g. when a user draws a line in their local view, the remote user can see it.
I'm working on an open source app that does exactly this using Agora's network instead of webRTC. It works pretty well as a proof of concept.
Two devices connect to the same channel, one as the host and one as the support. When the support user draws on the screen, the app adds a line in 3D for the other.
This link goes straight to the latest commit, as it's not all merged to the default branch yet:
https://github.com/AgoraIO-Community/AR-Remote-Support/tree/5a0b3a3c7c32b05e0548831bd80007ea624f8851
I am looking into developing an app to synchronize all pictures taken by the iPhone camera.
I searched quite a lot and can't find much about the hardware event for the camera shutter on the iPhone.
Is it possible, similar to Android's CAMERA_BUTTON BroadcastReceiver declared in the manifest, to listen for the camera button being pressed in general, without the app being specifically launched?
Or an overlay on the existing iOS camera app?
Update 02-05-2018
I didn't manage to get direct detection of the camera button, and no ongoing detection of pictures taken with the camera either (PHPhotoLibraryChangeObserver): when the app is killed, all listeners are killed with it. I am, however, using that observer once the app has been woken up via the location-change mechanism.
In the end I used the Significant-Change Location Service to keep detecting changed pictures to synchronize on an ongoing basis. I used the Nextcloud and ownCloud apps as examples, which contain this part.
Using the Significant-Change Location Service
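For reference, the significant-change wake-up looks roughly like this; iOS relaunches the app on a significant location change, and the delegate callback is where the photo sync can be kicked off (the sync itself is only indicated by a comment):

```swift
import CoreLocation

final class WakeUpManager: NSObject, CLLocationManagerDelegate {

    private let locationManager = CLLocationManager()

    func start() {
        locationManager.delegate = self
        // Needs the "Always" authorization plus the location background mode.
        locationManager.requestAlwaysAuthorization()
        locationManager.allowsBackgroundLocationUpdates = true
        locationManager.startMonitoringSignificantLocationChanges()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        // The app has been woken up (or relaunched) in the background;
        // check the Photos library for new assets and sync them here.
    }
}
```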
Capturing images and videos is an entirely software-driven process managed by classes in the AVFoundation framework. The iPhone's hardware is not accessible to applications, and you cannot monitor the use of the hardware directly. There are some system frameworks, but they won't help you here: AVFoundation does not post any notifications to registered observers.
All captured images and videos are put into the Photos library, and the Photos library posts notifications when something in it changes. You can register your application as an observer for changes in the Photos library, specify the changes you want to observe, collect the specific changes that have happened, and have your application handle them.
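A minimal sketch of such an observer, assuming you fetch the camera-roll assets once and then react to inserted assets (Photos access must already be authorized):

```swift
import Photos

final class LibraryWatcher: NSObject, PHPhotoLibraryChangeObserver {

    private var allPhotos: PHFetchResult<PHAsset>

    override init() {
        let options = PHFetchOptions()
        options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
        allPhotos = PHAsset.fetchAssets(with: .image, options: options)
        super.init()
        PHPhotoLibrary.shared().register(self)
    }

    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }

    func photoLibraryDidChange(_ changeInstance: PHChange) {
        guard let changes = changeInstance.changeDetails(for: allPhotos) else { return }
        // Keep the fetch result up to date and look only at newly inserted assets.
        allPhotos = changes.fetchResultAfterChanges
        let inserted = changes.insertedObjects
        if !inserted.isEmpty {
            // New pictures were taken or saved; hand them to your sync code here.
        }
    }
}
```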
What I don't know is whether this works while your app is not running, i.e. whether iOS will launch your app when a change it registered for occurs in the Photos library. I do know that you can have an app launched upon reception of certain notifications, but I don't know whether that applies to this change observer. I would suggest giving it a try.
Hope this helps.
I am developing a live broadcasting feature. I have built a custom camera to shoot video using AVCaptureSession, and we have a Wowza server for broadcasting.
So my question is how to encode the video coming from AVCaptureFileOutputRecordingDelegate / AVCaptureVideoDataOutputSampleBufferDelegate and send it to the server. I found many libraries, but they are not suitable for our application because they provide their own UI. Can anyone suggest another library, or a step-by-step integration?
Are you using the AVAssetWriterInput init(mediaType:outputSettings:sourceFormatHint:) initializer? It takes a dictionary with the desired encoding settings. From the docs: "Specify a dictionary containing the settings used for encoding the media appended to the output. You may pass nil for this parameter if you do not want the appended samples to be re-encoded."
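A minimal sketch of a writer input configured that way, with placeholder H.264 settings; note that AVAssetWriter itself writes to a file, so pushing the encoded data to Wowza (e.g. over RTMP) is a separate step not shown here:

```swift
import AVFoundation

func makeVideoWriterInput() -> AVAssetWriterInput {
    // Encoding settings handed to init(mediaType:outputSettings:); pass nil
    // instead if you do not want the appended samples to be re-encoded.
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,               // placeholder dimensions
        AVVideoHeightKey: 720,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: 2_000_000  // placeholder bitrate
        ]
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true  // required for live capture
    return input
}

// Inside captureOutput(_:didOutput:from:) you would append each frame:
// if writerInput.isReadyForMoreMediaData { writerInput.append(sampleBuffer) }
```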
I am new to iOS development and need to make a change to an iOS app I'm taking over: adding video to a tweet. The current app UI lets the user type in text for a tweet, but I would be changing that to let them pick a video to upload along with the tweet, similar to how the Twitter app works.
I see that the Twitter API supports uploading video, but I haven't been able to find any good examples of how to accomplish this using Xcode and Objective-C. Any recommended approaches or toolkits I can leverage to accomplish this?
https://dev.twitter.com/rest/public/uploading-media
I had to roll my own solution. Check out my project https://github.com/mtrung/TwitterVideoUpload.
It's lightweight because it uses Apple's built-in Social framework, so there is no need to add extra frameworks such as TwitterKit or Fabric.
It supports chunked upload.
It has built-in support for retrieving the user's credentials.
Thanks for the -1, that was helpful. So I thought I had found the answer with Fabric (https://get.fabric.io/). The Android side supports image and video upload with a tweet, but the iOS side does not (image only). It looks like you have to roll your own solution, including building a video picker; then you can use the Twitter REST API to upload the video. Not exactly what I was hoping for, but it is doable.
This link shows Objective-C and Swift code to do the video upload: Share video on Twitter with Fabric API without composer iOS.
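For reference, the chunked upload the REST API expects is a three-step INIT / APPEND / FINALIZE sequence against media/upload.json, followed by a status update referencing the returned media_id. A heavily condensed sketch using the Social framework's SLRequest (which signs requests with the system Twitter account, a feature Apple removed in iOS 11); the APPEND loop, FINALIZE, and the final tweet are only indicated in comments:

```swift
import Social
import Accounts

func startVideoUpload(videoData: Data, account: ACAccount) {
    let uploadURL = URL(string: "https://upload.twitter.com/1.1/media/upload.json")!

    // Step 1: INIT declares the media type and total size and returns a media_id.
    let initRequest = SLRequest(forServiceType: SLServiceTypeTwitter,
                                requestMethod: .POST,
                                url: uploadURL,
                                parameters: ["command": "INIT",
                                             "media_type": "video/mp4",
                                             "total_bytes": "\(videoData.count)"])
    initRequest?.account = account
    initRequest?.perform { data, _, error in
        guard error == nil, let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let mediaID = json["media_id_string"] as? String else { return }

        // Step 2: APPEND the video in chunks (command=APPEND, media_id,
        //         segment_index, chunk attached as multipart data named "media").
        // Step 3: FINALIZE with command=FINALIZE and media_id.
        // Step 4: POST statuses/update.json with the tweet text and
        //         media_ids=<mediaID> to publish the tweet.
        print("Got media_id \(mediaID); continue with APPEND/FINALIZE")
    }
}
```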
I'd like to stream video from the camera on an iOS device to a receiver via wifi, in effect turning the device into a wireless webcam. Is there a way to build a small app that captures the video input on an iOS device and sends it out via an RTSP stream or similar?
As this is an ad hoc experiment, I'm not concerned about App Store guidelines and can jailbreak if necessary.
If I interpret your question correctly, you more or less need to solve four problems:
Get the camera feed.
Convert/encode this to the right format.
Stream the data.
Prevent the phone from locking itself and going into deep sleep.
The first one is fairly simple, and Apple has, as always, provided good documentation and examples -> API link. Make sure you check out their example at the end, as you will get a CMSampleBufferRef data object back for each captured frame.
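A minimal sketch of that capture step, assuming the default camera and a serial delegate queue; each frame arrives as a CMSampleBuffer in the delegate callback:

```swift
import AVFoundation

final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.feed.queue")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)

        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each frame lands here as a CMSampleBuffer; this is the data you
        // would hand to the encoding / streaming code (steps 2 and 3).
    }
}
```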
For the second and third parts, you should check out the CFNetwork framework, and especially CFFTPStream for streaming over FTP.
If you are only building this for yourself, you can always turn off the Auto-Lock feature in Settings. If, on the other hand, you would like to distribute this to other users, you could use the trick of playing a muted sound every 10 seconds. This is more or less how all the alarm clock apps in the App Store work. Here's a tutorial. =)
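A rough sketch of that muted-sound trick, assuming a short silent clip named silence.caf is bundled with the app (the filename is a placeholder); looping it keeps the audio session, and therefore the app, alive:

```swift
import AVFoundation

final class KeepAlivePlayer {

    private var player: AVAudioPlayer?

    func start() {
        guard let url = Bundle.main.url(forResource: "silence", withExtension: "caf") else { return }
        // The playback category plus the audio background mode is what keeps
        // the app running while the screen would otherwise lock.
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
        try? AVAudioSession.sharedInstance().setActive(true)

        player = try? AVAudioPlayer(contentsOf: url)
        player?.numberOfLoops = -1   // loop the silent clip indefinitely
        player?.volume = 0
        player?.play()
    }
}
```

If the app only needs to stay awake while it is in the foreground, setting UIApplication's isIdleTimerDisabled flag is the simpler route than the audio trick.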
I hope I helped a little bit at least.
Good luck and best regards!
I'm 70% of the way to doing the same thing. Here's how I did it:
Capture content from video input
Chop the video into files for use in HTTP Live Streaming (HLS).
Spin up a web server on the iPhone and make the video files available.
Connect to the IP address of the phone and voilà! You've got live streaming video.
The last time I touched the code, I was trying to debug why my live streaming wasn't working. I'll try to get my source code posted on GitHub this weekend if you'd like to take a look.
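In the meantime, here is a rough sketch of step 3, under the assumption that the segmenter writes its .ts segments and the .m3u8 playlist into a local directory; it uses the third-party GCDWebServer library (my choice, not something mentioned above) to expose that directory over HTTP:

```swift
import GCDWebServer

/// Serves the HLS playlist and segment files over HTTP.
/// Keep a strong reference to the returned server while streaming.
func startHLSServer(segmentDirectory: String) -> GCDWebServer {
    let server = GCDWebServer()

    // Serve everything in the segment directory (playlist.m3u8 + .ts chunks).
    server.addGETHandler(forBasePath: "/",
                         directoryPath: segmentDirectory,
                         indexFilename: nil,
                         cacheAge: 0,
                         allowRangeRequests: true)

    if server.start(withPort: 8080, bonjourName: nil), let url = server.serverURL {
        // A player on the same network then opens e.g. http://<phone-ip>:8080/playlist.m3u8
        print("Serving HLS at \(url)")
    }
    return server
}
```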