I'm using AudioKit 4.0.1 on iOS.
After calling AKAudioPlayer.replace(file:), the AKAudioPlayer no longer loops correctly: it no longer takes the endTime value into account, and the file plays all the way to the end. Is this a bug or something I missed?
You say that AudioKit no longer takes endTime into account after you replace the file. That is because AKAudioPlayer resets its endTime whenever reloadFile() or replace(file:) is called.
Call hierarchy as of 4.0.1:
replace(file:) -> reloadFile() -> initialize()
initialize() sets endingFrame = totalFrameCount, which is what endTime is calculated from.
The solution is to set endTime again after calling replace(file:).
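For example, a minimal sketch, assuming player is your AKAudioPlayer, newFile is the replacement AKAudioFile, and the end point you want to keep is the one the player had before the swap:
let loopEnd = player.endTime          // capture the end point you want to keep, in seconds
try player.replace(file: newFile)     // this resets endingFrame to totalFrameCount
player.endTime = loopEnd              // restore it so looping respects the end point again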
I'm trying to use AudioKit 5 to dynamically create a player with a mixer and attach it to a main mixer. I'd like the resulting chain to look like:
AudioPlayer -> Mixer(for player) -> Mixer(for output) -> AudioEngine.output
My example repo is here: https://github.com/hoopes/AK5Test1
You can see in the main file here (https://github.com/hoopes/AK5Test1/blob/main/AK5Test1/AK5Test1App.swift) that there are three functions.
The first works, where an mp3 is played on a Mixer that is created when the controller class is created.
The second works, where a newly created AudioPlayer is hooked directly to the outputMixer.
However, the third, where I try to set up the chain above, does not, and crashes with the "player started when in a disconnected state" error. I've copied the function here:
/** Try to add a mixer with a player to the main mixer */
func doesNotWork() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    localMixer.addInput(p2)
    outputMixer.addInput(localMixer)
    playMp3(p: p2)
}
Where playMp3 just plays an example mp3 on the AudioPlayer.
I'm not sure how I'm misusing the Mixer. In my actual application I have a longer chain of mixers/boosters/etc. and am getting the same error, which led me to create this simple test app.
If anyone can offer advice, I'd love to hear it. Thanks so much!
In your case, you can swap outputMixer.addInput(localMixer) and localMixer.addInput(p2), and then it works.
Once you have started the engine, work backwards from the output when making your audio chain connections. Your problem was that you attached the player to a mixer that was disconnected from the output. You need to connect the new mixer to the output mixer first, and then attach the player to that mixer.
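A minimal sketch of the reordered function, reusing the names from the question (outputMixer and playMp3 come from the question's controller class):
/** Same as doesNotWork(), but with the connections made output-first */
func nowWorks() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    outputMixer.addInput(localMixer)   // attach the new mixer to the already-connected output first
    localMixer.addInput(p2)            // now the player is added to a mixer that can reach the output
    playMp3(p: p2)
}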
The advice I wound up getting from an AudioKit contributor was to do everything possible to create all audio objects that you need up front, and dynamically change their volume to "connect" and "disconnect", so to speak.
Imagine you have a piano app (a contrived example, but hopefully it gets the point across): instead of creating a player when a key is pressed, connecting it, and disconnecting/disposing of it when the note is complete, create a player for every key on startup and deal with them dynamically. This prevents any weirdness with the "disconnected state" error, etc.
This has been working pretty flawlessly for me since.
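For example, a rough sketch of that pattern in AudioKit 5. PianoVoices, keyDown(_:) and keyUp(_:) are hypothetical names; each key gets its own AudioPlayer and a per-voice Mixer created up front, and muting the voice mixer stands in for "disconnecting":
import AudioKit

class PianoVoices {
    let engine = AudioEngine()
    let outputMixer = Mixer()
    var players: [AudioPlayer] = []
    var voiceMixers: [Mixer] = []

    init(keyCount: Int) {
        for _ in 0..<keyCount {
            let player = AudioPlayer()
            let voiceMixer = Mixer(player)
            voiceMixer.volume = 0              // "disconnected" until the key is pressed
            outputMixer.addInput(voiceMixer)
            players.append(player)
            voiceMixers.append(voiceMixer)
        }
        engine.output = outputMixer
        try? engine.start()
    }

    func keyDown(_ index: Int) {
        // Assumes the player already has its sample loaded
        voiceMixers[index].volume = 1
        players[index].play()
    }

    func keyUp(_ index: Int) {
        players[index].stop()
        voiceMixers[index].volume = 0
    }
}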
Note that this is NOT a duplicate of this SO post, because that post only says WHAT method to use; there is no example of HOW I should use it.
So, I have dug into AKOfflineRenderNode as much as I can and have looked at every example I could find. However, my code never seems to work correctly on iOS 10.3.1 devices (and other iOS 10 versions): the result is always silent. I tried to follow the examples provided in other SO posts, but with no success. I also tried to follow the one in SongProcessor, but it uses an older version of Swift and I can't even compile it, and replicating SongProcessor's way of using AKOfflineRenderNode didn't help either; the output always comes out silent.
I created a demo project just to test this. Because I don't own the audio file I used for testing, I couldn't upload it to my GitHub. Please add an audio file named "Test" to the project before compiling for an iOS 10.3.1 simulator. (If your file isn't an m4a, remember to change the file type in the code where I initialize the AKPlayer.)
If you don't want to download and run the sample, the essential part is here:
@IBAction func export() {
    // url, player, offlineRenderer and others are predefined and connected as player >> aPitchShifter >> offlineRenderer
    // AudioKit.output is already offlineRenderer
    offlineRenderer.internalRenderEnabled = false
    try! AudioKit.start()

    // I also tried using AKAudioPlayer instead of AKPlayer
    // Also tried getting the start time in these ways:
    //   AVAudioTime.secondsToAudioTime(hostTime: 0, time: 0)
    //   player.audioTime(at: 0)
    // And for hostTime I've tried 0 as well as mach_absolute_time()
    // None worked
    let time = AVAudioTime(sampleTime: 0, atRate: offlineRenderer.avAudioNode.inputFormat(forBus: 0).sampleRate)
    player.play(at: time)

    try! offlineRenderer.renderToURL(url, duration: player.duration)

    player.stop()
    player.disconnectOutput()
    offlineRenderer.internalRenderEnabled = true
    try? AudioKit.stop()
}
Does anyone know how to change the WebRTC (https://cocoapods.org/pods/libjingle_peerconnection) video source?
I am working on a screen sharing app.
At the moment, I retrieve the rendered frames in real time as CVPixelBuffers. Does anyone know how I could feed my frames in as the video source?
Is it possible to set a video source other than the camera device? If yes, what format does the video have to be in, and how do I do it?
Thanks.
var connectionFactory: RTCPeerConnectionFactory = RTCPeerConnectionFactory()
let videoSource: RTCVideoSource = connectionFactory.videoSource()
videoSource.capturer(videoCapturer, didCapture: videoFrame!)
Mounis' answer is wrong; it leads to nothing, at least at the time of this writing. There is simply nothing happening.
In fact, you need to satisfy this delegate:
- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame;
(Note the difference from the Swift version: didCapture vs. didCaptureVideoFrame.)
Since this delegate is, for unclear reasons, not available at the Swift level (the compiler says you have to use didCapture, since it was renamed from didCaptureVideoFrame with Swift 3), you have to put the code into an ObjC class. I copied this and this (which are part of this sample project) into my project, made my videoCapturer an instance of ARDExternalSampleCapturer
self.videoCapturer = ARDExternalSampleCapturer(delegate: videoSource)
and within the capture callback I'm calling it
let capturer = self.videoCapturer as? ARDExternalSampleCapturer
capturer?.didCapture(sampleBuffer)
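Putting the pieces together, here is a rough sketch of how the parts above can be wired up; ScreenShareSource and push(_:) are hypothetical names, and it assumes the ARDExternalSampleCapturer ObjC files mentioned above are exposed to Swift via the bridging header:
import CoreMedia
import WebRTC

final class ScreenShareSource {
    private let videoSource: RTCVideoSource
    private let capturer: ARDExternalSampleCapturer
    let videoTrack: RTCVideoTrack

    init(factory: RTCPeerConnectionFactory) {
        videoSource = factory.videoSource()
        // The ObjC capturer forwards CMSampleBuffers to the video source for us
        capturer = ARDExternalSampleCapturer(delegate: videoSource)
        videoTrack = factory.videoTrack(with: videoSource, trackId: "screen0")
    }

    // Call this from wherever the rendered frames arrive,
    // e.g. ReplayKit's processSampleBuffer(_:with:)
    func push(_ sampleBuffer: CMSampleBuffer) {
        capturer.didCapture(sampleBuffer)
    }
}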
I am trying to create an app similar to the Reactable.
The user will be able to drag "modules" like an oscillator or filter from a menu into the "play area" and the module will be activated.
I am thinking of initializing the modules as they intersect with the "play area" background object. However, this requires me to name the modules automatically, i.e.:
let osci = AKOscillator()
where osci will automatically count up to be:
let osci1 = AKOscillator()
let osci2 = AKOscillator()
...
etc.
How will I be able to do this?
Thanks
Edit: I am trying to use an array, created as
var osciArray = [AKOscillator]()
and in my function to add an oscillator, this is my code:
let oscis = AKOscillator()
osciArray.append(oscis)
osciArray[oscCounter].frequency = freqValue
osciArray[oscCounter].amplitude = 0.5
osciArray[oscCounter].start()
selectedNode.userData = ["counter": oscCounter]
oscCounter += 1
currentOutput = osciArray[oscCounter]
AudioKit.output = currentOutput
AudioKit.start()
My app builds fine, but once it starts running on the Simulator I get the error: fatal error: Index out of range
I haven't used AudioKit, but I read about it a while ago and I have quite a big interest in it. From what I understand from the documentation, it's structured pretty much like SpriteKit: nodes connected together.
I guess then that most classes in the library derive from a base class, just like everything in SpriteKit derives from the SKNode class.
Since you are linking the AudioKit nodes to their visual representations via SpriteKit nodes, why don't you simply subclass SKSpriteNode and add an optional audioNode property typed as the AudioKit base class?
That way you can just use SpriteKit to interact directly with the stored audio node property.
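For example, a rough sketch of that idea; ModuleSprite, audioNode, activate() and deactivate() are hypothetical names, while AKNode and AKToggleable are AudioKit 4's base class and on/off protocol:
import AudioKit
import SpriteKit

class ModuleSprite: SKSpriteNode {
    // The AudioKit node this sprite represents (oscillator, filter, ...)
    var audioNode: AKNode?

    // Call when the sprite enters the "play area"
    func activate() {
        (audioNode as? AKToggleable)?.start()
    }

    // Call when the sprite leaves the "play area"
    func deactivate() {
        (audioNode as? AKToggleable)?.stop()
    }
}

// Usage: no auto-numbered variable names needed, each sprite owns its node
let sprite = ModuleSprite(imageNamed: "oscillator")
sprite.audioNode = AKOscillator()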
There's a lot of AudioKit-related code in your question, but to answer it you only have to look at oscCounter. You don't show its initial value, but I'm guessing it was zero. You then increment it by 1 and try to access osciArray[oscCounter], but the array has only one element, so it can only be accessed as osciArray[0]. Move the counter increment below the last line that reads from the array and you'll be better off. Furthermore, your oscillators look like local variables, so they'll be lost once the scope is exited; declare them as instance variables of your class (or whatever this code is part of).
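For example, a sketch of that reordering, keeping the names from the question and assuming osciArray, oscCounter and currentOutput are instance properties rather than locals:
let oscis = AKOscillator()
osciArray.append(oscis)
osciArray[oscCounter].frequency = freqValue
osciArray[oscCounter].amplitude = 0.5
osciArray[oscCounter].start()
selectedNode.userData = ["counter": oscCounter]

currentOutput = osciArray[oscCounter]   // read the element before advancing the counter
oscCounter += 1                         // only now move on to the next index

AudioKit.output = currentOutput
AudioKit.start()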
I'm trying to change the time signature (which defaults to 4/4) in a MusicSequence, but I don't understand how to do this. I have two MusicTracks inside the sequence and a MusicPlayer to play the music back. How can I change this value?
EDIT:
I now know that I need to add a time signature event to the MusicSequence's tempo track. I know I can get that track with MusicSequenceGetTempoTrack, but how do I add a time signature event to it?
EDIT 2:
Researching this, I realized that I need to add a meta event to the tempo track (via MusicTrackNewMetaEvent). Now I need to know how to correctly format the MIDIMetaEvent (I know that the metaEventType is 88, i.e. 0x58, but I don't know how to fill in the rest of the information).
After wasting 4 hours on this, I figured out how to do it. Here is the code:
// Get the tempo track
MusicTrack tempoTrack;
MusicSequenceGetTempoTrack(musicSequence, &tempoTrack);

// Set the time signature to 7/16.
// MIDIMetaEvent declares only a 1-byte data array, so allocate enough room
// for the 4 data bytes of a time signature meta event.
MIDIMetaEvent *timeSignatureMetaEvent = malloc(sizeof(MIDIMetaEvent) + 3);
timeSignatureMetaEvent->metaEventType = 0x58; // 0x58 (88) = time signature meta event
timeSignatureMetaEvent->dataLength = 4;
timeSignatureMetaEvent->data[0] = 0x07;       // numerator: 7
timeSignatureMetaEvent->data[1] = 0x04;       // denominator as a power of two: 2^4 = 16
timeSignatureMetaEvent->data[2] = 0x18;       // 24 MIDI clocks per metronome click
timeSignatureMetaEvent->data[3] = 0x08;       // 8 32nd notes per quarter note
MusicTrackNewMetaEvent(tempoTrack, 0, timeSignatureMetaEvent);
free(timeSignatureMetaEvent);
Here's a reference to the MIDI file spec where you can look up the time signature codes: http://www.somascape.org/midi/tech/mfile.html