I'm developing an iOS app that will ideally provide video chat functionality.
So far I've managed to make it work inside a Wi-Fi network using AVCaptureOutput, Bonjour, NSNetService, CFSocketStream and NSStream, with two iOS devices (client and server) connected to the same Wi-Fi.
What I want to achieve is to route the connection through my dedicated server rather than a local Wi-Fi network, so that two or more devices can also use 3G, LTE and so forth.
I would like to know how I can stream the camera FROM my iPhone TO my remote dedicated server.
I DON'T want to use Wowza as a server, I DON'T want OpenTok or similar tools, and I DON'T want Apple's HTTP Live Streaming tools (they go from SERVER TO iOS, not iOS TO SERVER, and they are for media streaming only, not a real-time camera/mic feed).
I've also read about CFHTTP requests, NSURLConnection, JSON and HTML5, but I still don't know how they work or whether they are what I need.
Summarizing:
How can I establish a connection between my iPhone and my remote dedicated server, and stream the iPhone camera/mic constantly at 30 fps in real time?
The short answer to your question is that Apple doesn't provide a way to do that in iOS: they simply do not offer a direct way to get at the hardware-encoded frames to send out. The longer answer is that you can, but you have to be savvy about iteratively packetizing short segments of hardware-encoded, written-to-file video and sending them over your preferred protocol.
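A rough sketch of the segment-writing half of that approach, using AVAssetWriter to produce short hardware-encoded .mp4 files that your networking layer would then packetize (the settings, dimensions and rotation policy here are illustrative assumptions, not the answerer's actual code):

```swift
import AVFoundation

// Creates a writer for one short segment. Call finishWriting(completionHandler:)
// every couple of seconds, hand the finished .mp4 to the network layer,
// and start a fresh writer for the next segment.
func makeSegmentWriter(outputURL: URL, width: Int, height: Int) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,   // hardware H.264 encoder
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true       // fed from the capture callback
    writer.add(input)
    return (writer, input)
}
```

You would append the CMSampleBuffers from your AVCaptureVideoDataOutput callback to the input (after startWriting() and startSession(atSourceTime:)), then rotate writers on a timer and ship each finished segment.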
Once you solve the packetization of hardware-encoded frames, you have to solve the replication issue (client -> server -> [multiple subscribers]). Since you don't want to use Wowza, and by the sound of it don't want to use any server you didn't write, you should probably read up on RTMP and RTSP as you write your own. I can't imagine a situation where I'd want to write my own RTMP server, but I won't judge you. ;-)
Note: I've done exactly what you (seemingly) are trying to do, doing exactly what I described in the first paragraph. I did use RTMP as the streaming protocol, and packetized short segments of h.264 hardware encoded files onto the stream. What I didn't write myself was the replication of the stream to end clients from the server. Use Wowza. Or nginx-rtmp. Or FMS. Anything -- if you really want to write your own, that's your prerogative, but honestly: don't.
In my application I update the user data when the user logs out or closes the application.
The problem is that when the user closes the application, the OS stops all of the app's processes, so I can't complete my write to Firebase.
What I want to do instead is save this data locally on the device and perform the update when the user logs back in.
I was going to save it via UserDefaults, but I thought that if the user had a jailbroken phone, they could theoretically access that storage and change the values.
Am I getting the wrong idea?
Thanks :)
You are right: normally your app's sandbox is protected, but a super-user can access it and read the data. One solution is to prevent the app from launching on rooted or jailbroken phones. There are libraries like this one to detect a jailbroken phone. Sometimes it is better to stop the app and show a pop-up explaining why it can't run on this phone because of security rules.
But keep in mind that doing so will lose you some users.
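If you prefer not to pull in a library, a minimal heuristic check looks roughly like this (the paths are common jailbreak artifacts; any such check can be bypassed by a determined user):

```swift
import Foundation

func isLikelyJailbroken() -> Bool {
    // Files that typically only exist on jailbroken devices.
    let suspiciousPaths = [
        "/Applications/Cydia.app",
        "/Library/MobileSubstrate/MobileSubstrate.dylib",
        "/bin/bash",
        "/etc/apt"
    ]
    if suspiciousPaths.contains(where: { FileManager.default.fileExists(atPath: $0) }) {
        return true
    }
    // The sandbox normally prevents writing outside the app container.
    let probePath = "/private/jailbreak_probe.txt"
    do {
        try "probe".write(toFile: probePath, atomically: true, encoding: .utf8)
        try FileManager.default.removeItem(atPath: probePath)
        return true   // write succeeded, sandbox is not intact
    } catch {
        return false
    }
}
```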
As for your primary issue, writing data when the user leaves the app: there are several well-supported solutions. This is a canonical example of what beginBackgroundTask(expirationHandler:) is for. Whenever you begin a Firebase update, call beginBackgroundTask, and whenever you finish the update, call endBackgroundTask. That tells the OS that you're currently performing an action that could benefit from a little more time before being terminated. You should expect something on the order of 30 seconds to a minute. (It used to be more like 3 minutes, but it's been tightened in newer OS versions.) That should be plenty of time for most updates.
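A minimal sketch of that pattern; updateRemoteUserData is a hypothetical stand-in for whatever your Firebase write looks like:

```swift
import UIKit

final class SessionSaver {
    private var taskID: UIBackgroundTaskIdentifier = .invalid

    func saveOnLogoutOrClose() {
        // Ask the OS for extra time before the app is suspended.
        taskID = UIApplication.shared.beginBackgroundTask(withName: "UserDataSync") { [weak self] in
            self?.finish()   // expiration handler: time is up, clean up
        }
        updateRemoteUserData { [weak self] in
            self?.finish()   // always end the task once the write completes
        }
    }

    private func finish() {
        guard taskID != .invalid else { return }
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }

    // Hypothetical stand-in for the Firebase update.
    private func updateRemoteUserData(completion: @escaping () -> Void) {
        completion()
    }
}
```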
If you are using URLSession directly, you can also make use of background tasks. See Downloading Files in the Background for details. This can be used to send data, not just transfer files. It has the major advantage of queuing operations when currently offline, and the OS will perform the transfer when possible, even if your app is no longer running. That said, this is all more complex to implement, and likely overkill for this kind of problem.
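A sketch of the background-session variant, assuming you post the data yourself; the identifier and endpoint are made up, and note that background sessions only accept upload tasks whose body comes from a file on disk:

```swift
import Foundation

// Background configuration: the OS runs the transfer even if the app exits.
let config = URLSessionConfiguration.background(withIdentifier: "com.example.userDataSync")
config.sessionSendsLaunchEvents = true
let session = URLSession(configuration: config)

var request = URLRequest(url: URL(string: "https://example.com/api/user-data")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")

// Write the pending JSON payload to disk first; background sessions require it.
let bodyFileURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("pending-update.json")
// ... serialize and write the payload to bodyFileURL here ...
session.uploadTask(with: request, fromFile: bodyFileURL).resume()
```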
That said, if you're storing the access token anywhere in your program (including in memory), a user who reverse engineers your app can always connect to Firebase directly and send anything they want. Whether you store it in UserDefaults, in a file, or just in memory doesn't really change that. Also, last I checked, Firebase doesn't support certificate pinning if you're using their SDK, so a user can just rewrite your packets using a proxy anyway without even jailbreaking the phone.
I think it would be better to store the user's data in the cloud.
I'm currently trying to create an app which makes use of the ReplayKit Live feature to broadcast the iPhone screen to a service which I want to provide myself (basically something similar to what TeamViewer QuickSupport is doing).
I know how to create the extension itself; however, I don't have enough information on the best way to process this data and upload it to the backend. I searched a lot, but without any success:
https://forums.developer.apple.com/thread/52434 - This is basically the same question I have but sadly without any answer
So my question is: does anyone have any insight on how the video sample buffer (made available by processSampleBuffer) could be processed and uploaded to a backend without inducing large delays?
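That linked thread never got an answer. For reference, the extension callback you would be working from looks like this; where and how you encode and upload (a VTCompressionSession plus your own socket or RTMP connection is one assumption, not an established solution) is exactly the open question:

```swift
import ReplayKit

class SampleHandler: RPBroadcastSampleHandler {
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            // Each call delivers one raw frame. To keep latency low you would
            // compress it here (e.g. with a VTCompressionSession) and push the
            // encoded data to your backend connection immediately.
            break
        case .audioApp, .audioMic:
            // Audio buffers arrive separately and need their own encoding path.
            break
        @unknown default:
            break
        }
    }
}
```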
Is there any way, using the currently available Cocoa (Touch) SDK frameworks, to create a streaming solution where I would host my mp4 content on some server and stream it to my iOS client app?
I know how to write such a client, but the server side is a bit confusing.
AFAIK CloudKit is not suitable for that task, because behind the scenes it keeps a synced local copy of the datastore, which is NOT what I want. I want to store media content remotely and stream it to the client so that it does not take up precious space on a poor 16 GB iPad mini.
Can I accomplish that server solution using Objective-C / Cocoa Touch at all?
Should I instead resort to Azure and C#?
It's not 100% clear why you would want to do anything like that.
If you have control over the server side, why don't you just set up a basic HTTP server and, on the client side, use AVPlayer to fetch the mp4 and play it back to the user? It is very simple; a basic Apache setup would do the job.
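The client side is only a few lines; the URL below is a placeholder for wherever your server hosts the file:

```swift
import UIKit
import AVKit
import AVFoundation

extension UIViewController {
    func playRemoteVideo() {
        // Placeholder URL: point this at the mp4 on your own HTTP server.
        let url = URL(string: "https://example.com/media/movie.mp4")!
        let playerController = AVPlayerViewController()
        playerController.player = AVPlayer(url: url)
        present(playerController, animated: true) {
            playerController.player?.play()
        }
    }
}
```

AVPlayer issues HTTP range requests itself, so seeking works against any server that supports byte ranges.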
If it is live media content you want to stream, then this guide is worth reading as well:
https://developer.apple.com/Library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/StreamingMediaGuide.pdf
Edited after your comment:
If you would like to use AVPlayer as a player, then I think those two things don't fit that well. AVPlayer needs to buffer different ranges ahead (for some container formats the second/third request is reading the end of the stream). As far as I can see CKFetchRecordsOperation (which you would use to fetch the content from the server) is not capable of seeking in the stream.
If you have your private player which doesn't require seeking, then you might be able to use CKFetchRecordsOperation's perRecordProgressBlock to feed your player with data.
Yes, you could do that with CloudKit. First, it is not true that CloudKit keeps a local copy of the data. It is up to you what you do with the downloaded data. There isn't even any caching in CloudKit.
To do what you want to do, assuming the content is shared between users, you could upload it to CloudKit in the public database of your app. I think you could do this with the CloudKit web interface, but otherwise you could create a simple Mac app to manage the uploads.
The client app could then download the files. It couldn't stream them though, as far as I know. It would have to download all the files.
If you want a streaming solution, you would probably have to figure out how to split the files into small chunks, and recombine them on the client app.
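A rough sketch of the recombination side, assuming a hypothetical "MediaChunk" record type with an integer "index" field and a CKAsset field called "data" (the schema and field names are made up for illustration):

```swift
import CloudKit

// Fetch all chunk records in order and append each downloaded asset file
// to a single local output file.
func downloadChunks(to outputURL: URL, completion: @escaping (Error?) -> Void) {
    let query = CKQuery(recordType: "MediaChunk", predicate: NSPredicate(value: true))
    query.sortDescriptors = [NSSortDescriptor(key: "index", ascending: true)]

    CKContainer.default().publicCloudDatabase.perform(query, inZoneWith: nil) { records, error in
        guard let records = records, error == nil else {
            completion(error)
            return
        }
        FileManager.default.createFile(atPath: outputURL.path, contents: nil)
        guard let handle = try? FileHandle(forWritingTo: outputURL) else {
            completion(CocoaError(.fileWriteUnknown))
            return
        }
        defer { handle.closeFile() }
        for record in records {
            if let asset = record["data"] as? CKAsset,
               let fileURL = asset.fileURL,
               let chunk = try? Data(contentsOf: fileURL) {
                handle.write(chunk)   // recombine chunks in index order
            }
        }
        completion(nil)
    }
}
```

A production version would also page through query cursors for large chunk counts.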
I'm not sure whether this document is up to date, but there is a paragraph, "Requirements for Apps", which demands using HTTP Live Streaming if you deliver any video exceeding 10 minutes or 5 MB.
I have tried to use LiVu and Broadcast Me, but they do not work smoothly for what I am trying to do. I need to live stream audio/video from the iPhone to our servers (while saving locally).
I have tried to implement an RTSP/UDP stream, but it is proving to be more of a challenge than we initially thought.
RTSP/UDP is preferred, but whatever gets the stream to the servers in a timely fashion will work.
Any advice or framework suggestions would really help. I have already looked at iOS-RTMP-Library, but it's too expensive for us to use at this point.
I don't know about your budget, but you might check out the ANGL lib, which worked fine for us over RTMP.
A potential client has come to me asking for an app which will stream a six-hour audio file. The user needs to be able to set the "playback head" to any position along the file. Presumably, this means that the app must not be forced to download the entire file before it begins playing back from an arbitrary position.
An added complication -- there are actually four files which need to be streamed and mixed simultaneously.
My questions are:
1) Is there an out-of-the-box technology which will allow me random access into streaming audio on iOS? Can this be done with standard server technology and a single long file, or will it involve some fancy server tech?
2) Which iOS framework is best suited for this? Is there anything high-level that would allow me to easily mix these four audio files?
3) Can this be done entirely with standard browser technology on the client side? (i.e. HTML5)
Have a close look at the MP3 format. It is remarkably easy and efficient to parse, chop up into little bits, and reassemble into a custom stream.
Hence rolling your own server-side code to grab what you want and send it to the client will not be as crazy or difficult as it may sound.
MP3 is also widely supported by various clients. I strongly suspect any HTML5-capable browser will be able to play the stream you generate via a long-lived, bit-rate-regulated HTTP request.
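To give a sense of how simple the parsing is, frame boundaries can be located by scanning for the 11-bit frame sync. This is only a rough sketch that collects candidate sync offsets; a real splitter would also decode each header's bitrate and sample-rate fields to compute exact frame lengths before chopping:

```swift
import Foundation

// An MP3 frame starts with 0xFF followed by a byte whose top three bits are set.
func candidateFrameOffsets(in data: Data) -> [Int] {
    var offsets: [Int] = []
    let bytes = [UInt8](data)
    var i = 0
    while i + 1 < bytes.count {
        if bytes[i] == 0xFF && (bytes[i + 1] & 0xE0) == 0xE0 {
            offsets.append(i)   // candidate frame boundary
        }
        i += 1
    }
    return offsets
}
```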