Send UISlider values using Multipeer Connectivity (Wi-Fi/Bluetooth) - iOS

I'm starting to create music production applications, and the Multipeer Connectivity framework could come in handy!
I can set up a connection between two iDevices, but my goal is to send UISlider values from one device to another, where they will go straight into my sound engine on the host device.
Should I use an NSStream or just send NSData (perhaps using MCSessionSendDataUnreliable)?
And if NSData, when should I send it? Should I attach a selector for UIControlEventValueChanged?
I'm having trouble with everything at the moment on this one task.
The Multipeer Connectivity framework seems awesome, and I think many people could use this.

All of your ideas are spot on. The choice between NSStream and NSData will come down to how frequently the sliders are updated. Since you'll be doing music production, time synchronization will be crucial (especially if you are doing anything with MIDI).
If time and latency are indeed factors, I would recommend going the NSStream route and routing all packets through that stream. That said, you can easily prototype the idea using NSData first and then determine whether the latency is an issue.
Roughly, what you will need to do is package up the data you want to transmit as an NSData and send it over the wire. You have two options here: you can either create C structs and initialize the data from pointers to those structs, or you can create an NSObject subclass that conforms to the NSSecureCoding protocol and then use NSKeyedArchiver and NSKeyedUnarchiver to convert the object to and from NSData.
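For a single slider value, the raw-bytes route is enough. Here is a minimal Swift sketch, assuming you already have a connected MCSession; the SliderSync class and its method names are placeholders for illustration:

    import UIKit
    import MultipeerConnectivity

    class SliderSync: NSObject {
        let session: MCSession

        init(session: MCSession) {
            self.session = session
        }

        // Attach this action to the slider for UIControlEventValueChanged
        // (.valueChanged), as suggested in the question.
        @objc func sliderChanged(_ slider: UISlider) {
            // Pack the raw Float value into a small payload.
            var value = slider.value
            let data = Data(bytes: &value, count: MemoryLayout<Float>.size)
            do {
                // .unreliable (MCSessionSendDataUnreliable) suits this case:
                // a dropped packet is immediately superseded by the next value.
                try session.send(data, toPeers: session.connectedPeers, with: .unreliable)
            } catch {
                print("send failed: \(error)")
            }
        }
    }

On the receiving side, the MCSessionDelegate's session(_:didReceive:fromPeer:) callback can unpack the value with data.withUnsafeBytes { $0.load(as: Float.self) } and feed it straight into the sound engine.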

Related

How to transfer video data using Adaptive Autosar

I am working on an Adaptive Autosar project, where input data (video) captured from a camera sensor needs to be transferred from a client machine to a server machine, which runs the object detection algorithm.
SOME/IP (Scalable service-Oriented MiddlewarE over IP) is used as the middleware.
Is it possible to share a video file using the SOME/IP protocol?
If not, what is another method to share the video frames?
The problem would be that you would need a very good connection between the two ECUs, and I doubt that even with Ethernet you can pass the data fast enough to maintain a certain performance. It might make sense to preprocess the data before transmitting it somewhere else.
Transmission would be done as a byte stream with a segmentation protocol, e.g. SOME/IP-TP; you might also think about compression if possible. UDP instead of TCP might be a good idea too, but consider the possible drawbacks of UDP. A sketch of the segmentation idea follows below.
Vector seems to provide a MICROSAR module called MICROSAR.AVB for Audio/Video Bridging.
But make sure the sensor/camera does not produce data faster than the ECU can push it out over the network.
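Purely for illustration, here is a hedged Swift sketch of the segmentation idea behind SOME/IP-TP, written against Apple's Network framework only because a real Autosar SOME/IP stack is not publicly available; the tiny header layout is invented for this example:

    import Foundation
    import Network

    // Split a large video frame into datagram-sized chunks, each with a
    // small header (sequence number + "more segments" flag) so the
    // receiver can reassemble the frame and detect loss.
    func sendFrame(_ frame: Data, over connection: NWConnection, chunkSize: Int = 1400) {
        var offset = 0
        var sequence: UInt32 = 0
        while offset < frame.count {
            let end = min(offset + chunkSize, frame.count)
            var header = sequence.bigEndian
            var packet = Data(bytes: &header, count: MemoryLayout<UInt32>.size)
            let moreSegments: UInt8 = end < frame.count ? 1 : 0
            packet.append(moreSegments)
            packet.append(frame[offset..<end])
            connection.send(content: packet, completion: .contentProcessed { _ in })
            offset = end
            sequence += 1
        }
    }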

How to pass the sample buffer from processSampleBuffer from an app extension to a controller in iOS

I am struggling to pass processSampleBuffer's sample buffer from an app extension to a controller. I have tried to pass the buffers through a delegate, but it's not working. Any answer regarding this will be helpful.
You can do it using App Groups. Check this article to understand how it works. This solution is based on the BBPortal framework. I should warn you that this is not the best solution, because the transfer of sample buffers will depend on the read/write speed of the device's flash memory. I implemented this solution in my app and it was quite slow, so I changed the functionality to send the sample buffers to a server instead, which is much faster.
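As a rough illustration of the App Group route, a minimal Swift sketch; the group identifier is a placeholder, and as noted above the flash I/O is the bottleneck:

    import Foundation

    // Persist raw buffer bytes to the shared App Group container so the
    // containing app can pick them up. "group.com.example.shared" is a
    // placeholder identifier; use your own App Group ID.
    func shareBuffer(_ bufferData: Data) throws {
        guard let containerURL = FileManager.default
            .containerURL(forSecurityApplicationGroupIdentifier: "group.com.example.shared") else {
            throw CocoaError(.fileNoSuchFile)
        }
        let fileURL = containerURL.appendingPathComponent("latest-buffer.bin")
        // Atomic write so the reader never sees a half-written buffer.
        try bufferData.write(to: fileURL, options: .atomic)
    }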
You can create a library/framework that manages both your app's and your extension's IO. Then send buffers from your extension to your framework, and after processing them you can inform your container app. For example (see the sketch after this list):
MyExtension.appex: sends buffers to your common framework via delegation or KVO (more expensive).
MyCommonFramework.framework: receives the buffer and its type, renders it with Core Image, and once it is processed passes it on to the main app.
MyApp.app: can hold a reference to MyCommonFramework's buffer-processor classes and will receive the processed buffers, or the video/audio itself, via delegation or KVO.
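A minimal Swift sketch of such a shared framework; all type names here are hypothetical. Note that a delegate only connects objects within a single process, so this pattern organizes the code, while the actual extension-to-app hand-off still needs an IPC mechanism such as the App Group approach above:

    import CoreMedia

    public protocol BufferProcessorDelegate: AnyObject {
        func bufferProcessor(_ processor: BufferProcessor,
                             didProcess buffer: CMSampleBuffer)
    }

    public final class BufferProcessor {
        public weak var delegate: BufferProcessorDelegate?
        public init() {}

        // Called by the extension for each buffer from processSampleBuffer.
        public func process(_ buffer: CMSampleBuffer) {
            // ... Core Image or custom processing would go here ...
            delegate?.bufferProcessor(self, didProcess: buffer)
        }
    }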

Send audio buffer in parts to a server from an iOS device's microphone

I am trying to build an iOS application that streams audio coming directly from the input (or mic) of a device. What I am thinking is that every so often I'd have to send the audio buffer to the server, so that the server can send it to another client that might want to listen. I am planning to use WebSockets for the server-side implementation.
Is there a way to grab just a specific chunk of the buffer from the input (mic) of the iOS device and send it to the server while the user speaks the next bit, and so on? I am thinking I could start an AVAudioRecorder, perhaps with AVAudioEngine, and record every second or half second, but I think that would create too much of a delay and possibly lose parts of the stream in the transitions.
Is there a better way to accomplish this? I am really interested in understanding the science behind it. If this is not the best approach please tell me which one it is and maybe a basic idea for its implementation or something that could point me in the right direction.
I found the answer to my own question! The answer lies in the AVFoundation framework, specifically in AVCaptureAudioDataOutput and its delegate, which will hand you a buffer as soon as the input source captures it.
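A minimal Swift sketch of that capture pipeline; the WebSocket transport is stubbed out, and sendToServer is a placeholder for whatever client you use:

    import AVFoundation

    final class MicStreamer: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
        private let session = AVCaptureSession()
        private let output = AVCaptureAudioDataOutput()
        private let queue = DispatchQueue(label: "mic.capture")

        func start() throws {
            guard let mic = AVCaptureDevice.default(for: .audio) else { return }
            let input = try AVCaptureDeviceInput(device: mic)
            if session.canAddInput(input) { session.addInput(input) }
            output.setSampleBufferDelegate(self, queue: queue)
            if session.canAddOutput(output) { session.addOutput(output) }
            session.startRunning()
        }

        // Called for every captured buffer, typically every few milliseconds,
        // so there is no need to record in one-second chunks.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let block = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
            var length = 0
            var pointer: UnsafeMutablePointer<CChar>?
            CMBlockBufferGetDataPointer(block, atOffset: 0, lengthAtOffsetOut: nil,
                                        totalLengthOut: &length, dataPointerOut: &pointer)
            if let pointer = pointer {
                let chunk = Data(bytes: pointer, count: length)
                sendToServer(chunk)
            }
        }

        // Placeholder: wire this to your WebSocket client.
        private func sendToServer(_ data: Data) { /* ... */ }
    }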

Can I use TDAudioStreamer with GCDAsyncSocket or NetService?

I want to stream an audio file to multiple devices on the local network, with one device acting as the server and the others as clients. I found https://github.com/tonyd256/TDAudioStreamer, a class that streams audio to connected clients, but it uses Multipeer Connectivity. I wonder: can I use this class with GCDAsyncSocket or NetService, and if so, how?
In my experience, you have to pass a stream object to these classes in order to make them work with GCDAsyncSocket or NetService. Then, after the client side receives the data from the server, you have to convert that data into the proper format to play it.
That's all I know.
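To make that concrete, here is a hedged Swift sketch: GCDAsyncSocket delivers data via delegate callbacks rather than as a stream, so one approach is to bridge its bytes into a bound stream pair and hand the read end to TDAudioInputStreamer (whose initWithInputStream:/start API is taken from that project's README):

    import Foundation

    var inputStream: InputStream?
    var outputStream: OutputStream?

    // Create a bound pair: whatever is written to outputStream becomes
    // readable from inputStream.
    Stream.getBoundStreams(withBufferSize: 4096,
                           inputStream: &inputStream,
                           outputStream: &outputStream)
    outputStream?.open()

    // Hand the read end to the streamer (TDAudioStreamer is Objective-C):
    // let streamer = TDAudioInputStreamer(inputStream: inputStream!)
    // streamer?.start()

    // In GCDAsyncSocketDelegate's socket(_:didRead:withTag:), forward bytes:
    // data.withUnsafeBytes { raw in
    //     _ = outputStream?.write(raw.bindMemory(to: UInt8.self).baseAddress!,
    //                             maxLength: data.count)
    // }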

Playing back a WAV file streamed gradually over a network connection in iOS

I'm working with a third party API that behaves as follows:
I have to connect to its URL and make my request, which involves POSTing request data;
the remote server then sends back the corresponding WAV data a "chunk" at a time (which I receive in my NSURLConnectionDataDelegate's didReceiveData callback).
By "chunk" for argument's sake, we mean some arbitrary "next portion" of the data, with no guarantee that it corresponds to any meaningful division of the audio (e.g. it may not be aligned to a specific multiple of audio frames, the number of bytes in each chunk is just some arbitrary number that can be different for each chunk, etc).
Now, correct me if I'm wrong: I can't simply use an AVAudioPlayer, because I need to POST to my URL, so I need to pull back the data "manually" via an NSURLConnection.
So, given the above, what is the most painless way for me to play back that audio as it comes down the wire? (I appreciate that I could concatenate all the arrays of bytes and then pass the whole thing to an AVAudioPlayer at the end; only that this would delay the start of playback, as I would have to wait for all the data.)
I will give a bird's eye view of the solution. I think this will point you a great deal of the way toward a concrete, coded solution.
iOS provides a zoo of audio APIs and several of them can be used to play audio. Which one of them you choose depends on your particular requirements. As you wrote already, the AVAudioPlayer class is not suitable for your case, because with this one, you need to know all the audio data in the moment you start playing audio. Obviously, this is not the case for streaming, so we have to look for an alternative.
A good tradeoff between ease of use and versatility is Audio Queue Services, which I recommend for you. Another alternative would be Audio Units, but they are a low-level C API and therefore less intuitive to use, and they have many pitfalls. So stick to Audio Queues.
Audio Queues allow you to define callback functions which are called from the API when it needs more audio data for playback - similarly to the callback of your network code, which gets called when there is data available.
Now the difficulty is how to connect two callbacks, one which supplies data and one which requests data. For this, you have to use a buffer. More specifically, a queue (don't confuse this queue with the Audio Queue stuff. Audio Queue Services is the name of an API. On the other hand, the queue I'm talking about next is a container object). For clarity, I will call this one buffer-queue.
To fill data into the buffer-queue you will use the network callback function, which supplies data to you from the network. And data will be taken out of the buffer-queue by the audio callback function, which is called by the Audio Queue Services when it needs more data.
You have to find a buffer-queue implementation which supports concurrent access (i.e. it is thread-safe), because it will be accessed from two different threads: the audio thread and the network thread.
As an alternative to finding an already thread-safe buffer-queue implementation, you can take care of the thread safety on your own, e.g. by executing all code dealing with the buffer-queue on a certain dispatch queue (a third kind of queue here; yes, Apple and IT love them).
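A minimal thread-safe buffer-queue sketch in Swift, guarding the storage with a serial dispatch queue as suggested (in a production player you would avoid blocking the audio thread, but this shows the shape):

    import Foundation

    final class ByteFIFO {
        private var storage = Data()
        private let syncQueue = DispatchQueue(label: "byte.fifo")

        // Called from the network thread.
        func enqueue(_ data: Data) {
            syncQueue.sync { storage.append(data) }
        }

        // Called from the audio thread; returns up to `count` bytes,
        // or fewer (possibly none) if the queue is running dry.
        func dequeue(upTo count: Int) -> Data {
            syncQueue.sync {
                let n = min(count, storage.count)
                let out = Data(storage.prefix(n))
                storage.removeFirst(n)
                return out
            }
        }
    }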
Now, what happens if either
The audio callback is called and your buffer-queue is empty, or
The network callback is called and your buffer-queue is already full?
In both cases, the respective callback function can't proceed normally. The audio callback function can't supply audio data if there is none available and the network callback function can't store incoming data if the buffer-queue is full.
In these cases, I would first try out blocking further execution until more data is available or respectively space is available to store data. On the network side, this will most likely work. On the audio side, this might cause problems. If it causes problems on the audio side, you have an easy solution: if you have no data, simply supply silence as data. That means that you need to supply zero-frames to the Audio Queue Services, which it will play as silence to fill the gap until more data is available from the network.
This is the concept that all streaming players use when suddenly the audio stops and it tells you "buffering" next to some kind of spinning icon indicating that you have to wait and nobody knows for how long.
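Here is what the silence-fill idea can look like in an Audio Queue output callback, sketched in Swift against the ByteFIFO above; queue and format setup are omitted, and the user-data pointer is assumed to carry the FIFO you passed to AudioQueueNewOutput:

    import Foundation
    import AudioToolbox

    let outputCallback: AudioQueueOutputCallback = { userData, queue, buffer in
        // Recover the shared buffer-queue from the user-data pointer.
        let fifo = Unmanaged<ByteFIFO>.fromOpaque(userData!).takeUnretainedValue()
        let capacity = Int(buffer.pointee.mAudioDataBytesCapacity)
        let chunk = fifo.dequeue(upTo: capacity)

        if chunk.isEmpty {
            // No network data yet: play zero-frames (silence) to fill the gap.
            memset(buffer.pointee.mAudioData, 0, capacity)
            buffer.pointee.mAudioDataByteSize = UInt32(capacity)
        } else {
            _ = chunk.withUnsafeBytes { raw in
                memcpy(buffer.pointee.mAudioData, raw.baseAddress!, chunk.count)
            }
            buffer.pointee.mAudioDataByteSize = UInt32(chunk.count)
        }
        // Hand the filled buffer back to the Audio Queue for playback.
        AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
    }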
